icount: make dma reads deterministic

Message ID 158315399043.847.4021939910752786131.stgit@pasha-Precision-3630-Tower
State New
Series icount: make dma reads deterministic

Commit Message

Pavel Dovgalyuk March 2, 2020, 12:59 p.m. UTC
A Windows guest sometimes issues DMA requests with overlapping
target addresses. This leads to the following iov structure being
passed to the block driver:

addr size1
addr size2
addr size3

This means that three adjacent disk blocks are to be read into the same
memory buffer. Windows does not expect any particular contents in these
bytes (data from the first block, the last one, or some mix), but it
does use them somehow. This makes guest execution non-deterministic,
because the block driver does not guarantee any ordering of the reads.

This situation was discussed on the mailing list at least twice:
https://lists.gnu.org/archive/html/qemu-devel/2010-09/msg01996.html
https://lists.gnu.org/archive/html/qemu-devel/2020-02/msg05185.html

This patch makes such disk reads deterministic in icount mode.
It skips SG parts that were already covered by earlier reads within
the same request. Parts that are not identical but merely overlap
are trimmed.

Examples for different SG part sequences:

1)
A1 1000
A1 1000
->
A1 1000

2)
A1 1000
A2 1000
A1 1000
A3 1000
->
Two requests with different offsets, because the second A1/1000 entry
should be skipped.
A1 1000
A2 1000
--
A3 1000

3)
A1 800
A2 1000
A1 1000
->
The first 800 bytes of the third SG are skipped.
A1 800
A2 1000
--
A1+800 800
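
For illustration, here is a small standalone program (not QEMU code;
A1 and A2 are arbitrary guest addresses, and the request splitting
marked with "--" above is not modeled) that applies the same cropping
rule to example 3:

    #include <inttypes.h>
    #include <stdio.h>

    struct sg_part { uint64_t base, len; };

    int main(void)
    {
        /* Example 3: A1 = 0x1000, A2 = 0x2000 (arbitrary), sizes in hex. */
        struct sg_part list[] = {
            { 0x1000, 0x800 }, { 0x2000, 0x1000 }, { 0x1000, 0x1000 }
        };
        int n = sizeof(list) / sizeof(list[0]);

        for (int cur = 0; cur < n; cur++) {
            uint64_t addr = list[cur].base;
            uint64_t len = list[cur].len;

            for (int i = 0; i < cur && len > 0; i++) {
                uint64_t end = list[i].base + list[i].len;

                if (list[i].base <= addr && addr < end) {
                    /* Head was already read by an earlier part: crop it. */
                    uint64_t crop = end - addr < len ? end - addr : len;
                    addr += crop;
                    len -= crop;
                } else if (addr <= list[i].base && list[i].base < addr + len) {
                    /* Tail overlaps an earlier part: keep only the head. */
                    len = list[i].base - addr;
                }
            }
            printf("part %d: read 0x%" PRIx64 " bytes at 0x%" PRIx64 "\n",
                   cur, len, addr);
        }
        return 0;
    }

It prints 0x800 bytes at A1, 0x1000 bytes at A2 and 0x800 bytes at
A1+800, matching the expected result above; a fully duplicated entry
as in example 1 comes out with length 0, i.e. skipped entirely.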

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
---
 dma-helpers.c |   57 +++++++++++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 53 insertions(+), 4 deletions(-)

Comments

Kevin Wolf March 2, 2020, 4:19 p.m. UTC | #1
Am 02.03.2020 um 13:59 hat Pavel Dovgalyuk geschrieben:
> Windows guest sometimes makes DMA requests with overlapping
> target addresses. This leads to the following structure of iov for
> the block driver:
> 
> addr size1
> addr size2
> addr size3
> 
> It means that three adjacent disk blocks should be read into the same
> memory buffer. Windows does not expects anything from these bytes
> (should it be data from the first block, or the last one, or some mix),
> but uses them somehow. It leads to non-determinism of the guest execution,
> because block driver does not preserve any order of reading.
> 
> This situation was discusses in the mailing list at least twice:
> https://lists.gnu.org/archive/html/qemu-devel/2010-09/msg01996.html
> https://lists.gnu.org/archive/html/qemu-devel/2020-02/msg05185.html
> 
> This patch makes such disk reads deterministic in icount mode.
> It skips SG parts that were already affected by prior reads
> within the same request. Parts that are non identical, but are just
> overlapped, are trimmed.
> 
> Examples for different SG part sequences:
> 
> 1)
> A1 1000
> A1 1000
> ->
> A1 1000
> 
> 2)
> A1 1000
> A2 1000
> A1 1000
> A3 1000
> ->
> Two requests with different offsets, because second A1/1000 should be skipped.
> A1 1000
> A2 1000
> --
> A3 1000

How is the "--" line represented in the code?

> 3)
> A1 800
> A2 1000
> A1 1000
> ->
> First 800 bytes of third SG are skipped.
> A1 800
> A2 1000
> --
> A1+800 800
> 
> Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
> ---
>  dma-helpers.c |   57 +++++++++++++++++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 53 insertions(+), 4 deletions(-)
> 
> diff --git a/dma-helpers.c b/dma-helpers.c
> index e8a26e81e1..d71512f707 100644
> --- a/dma-helpers.c
> +++ b/dma-helpers.c
> @@ -13,6 +13,7 @@
>  #include "trace-root.h"
>  #include "qemu/thread.h"
>  #include "qemu/main-loop.h"
> +#include "sysemu/cpus.h"
>  
>  /* #define DEBUG_IOMMU */
>  
> @@ -139,17 +140,65 @@ static void dma_blk_cb(void *opaque, int ret)
>      dma_blk_unmap(dbs);
>  
>      while (dbs->sg_cur_index < dbs->sg->nsg) {
> +        bool skip = false;
>          cur_addr = dbs->sg->sg[dbs->sg_cur_index].base + dbs->sg_cur_byte;
>          cur_len = dbs->sg->sg[dbs->sg_cur_index].len - dbs->sg_cur_byte;
> -        mem = dma_memory_map(dbs->sg->as, cur_addr, &cur_len, dbs->dir);
> -        if (!mem)
> -            break;
> -        qemu_iovec_add(&dbs->iov, mem, cur_len);
> +
> +        /*
> +         * Make reads deterministic in icount mode.
> +         * Windows sometimes issues disk read requests with
> +         * overlapping SGs. It leads to non-determinism, because
> +         * resulting buffer contents may be mixed from several
> +         * sectors.
> +         * This code crops SGs that were already read in this request.
> +         */

Please make use of the full line length for the comment text, and add
empty lines between paragraphs.

> +        if (use_icount
> +            && dbs->dir == DMA_DIRECTION_FROM_DEVICE) {

This fits in a single line.

> +            int i;
> +            for (i = 0 ; i < dbs->sg_cur_index ; ++i) {
> +                if (dbs->sg->sg[i].base <= cur_addr
> +                    && dbs->sg->sg[i].base + dbs->sg->sg[i].len > cur_addr) {

This is range_covers_byte(dbs->sg->sg[i].base, dbs->sg->sg[i].len, cur_addr).

> +                    cur_len = MIN(cur_len,
> +                        dbs->sg->sg[i].base + dbs->sg->sg[i].len - cur_addr);

cur_len is now the number of bytes that are not covered by sg[i] yet.
cur_addr is unchanged. It's not used after this, so I guess this
inconsistency is acceptable.

> +                    skip = true;
> +                    break;
> +                } else if (cur_addr <= dbs->sg->sg[i].base
> +                    && cur_addr + cur_len > dbs->sg->sg[i].base) {

This is range_covers_byte(cur_addr, cur_len, dbs->sg->sg[i].base).
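
For reference, that helper in include/qemu/range.h reads roughly as
follows (paraphrased here, not taken from this patch):

    static inline bool range_covers_byte(uint64_t offset, uint64_t len,
                                         uint64_t byte)
    {
        return offset <= byte && byte <= offset + len - 1;
    }

so the two checks in the loop could become:

    if (range_covers_byte(dbs->sg->sg[i].base, dbs->sg->sg[i].len,
                          cur_addr)) {
        cur_len = MIN(cur_len,
                      dbs->sg->sg[i].base + dbs->sg->sg[i].len - cur_addr);
        skip = true;
        break;
    } else if (range_covers_byte(cur_addr, cur_len, dbs->sg->sg[i].base)) {
        cur_len = dbs->sg->sg[i].base - cur_addr;
        break;
    }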

> +                    cur_len = dbs->sg->sg[i].base - cur_addr;

cur_len is again the number of bytes not covered by sg[i]. cur_addr is
unchanged, but actually makes sense in this branch.

> +                    break;
> +                }
> +            }
> +        }
> +
> +        assert(cur_len);

What stops a guest from adding a zero-length entry?
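
One way to tolerate that, instead of asserting, would be to simply skip
empty elements (just a sketch, not something from the posted patch):

    if (cur_len == 0) {
        /* A zero-length SG element supplied by the guest makes cur_len
         * zero here; move on to the next element rather than tripping
         * the assertion below. */
        dbs->sg_cur_byte = 0;
        ++dbs->sg_cur_index;
        continue;
    }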

> +        if (!skip) {
> +            mem = dma_memory_map(dbs->sg->as, cur_addr, &cur_len, dbs->dir);
> +            if (!mem)
> +                break;
> +            qemu_iovec_add(&dbs->iov, mem, cur_len);

The if statement requires braces.

You're adding a possibly shortened iovec entry here, and then you
continue to add more iovec entries after it. This is the part where I
think the "--" entries from the commit message are missing.

dbs->io_func doesn't know that some part in the middle of the request is
missing, so you'll operate on the wrong data.

> +        } else {
> +            if (dbs->iov.size != 0) {
> +                break;
> +            }

Ok, if we already had something before the "hole", this just splits the
request in two.

How is it possible for dbs->iov.size to be 0, when we just found out
that it overlaps with something?

> +            /* skip this SG */
> +            dbs->offset += cur_len;

Ok, if we didn't have anything before it (for whatever reason), we can
just start the request later.

> +        }
> +
>          dbs->sg_cur_byte += cur_len;
>          if (dbs->sg_cur_byte == dbs->sg->sg[dbs->sg_cur_index].len) {
>              dbs->sg_cur_byte = 0;
>              ++dbs->sg_cur_index;
>          }
> +
> +        /*
> +         * All remaining SGs were skipped.
> +         * This is not reschedule case, because we already
> +         * performed the reads, and the last SGs were skipped.
> +         */
> +        if (dbs->sg_cur_index == dbs->sg->nsg && dbs->iov.size == 0) {
> +            dma_complete(dbs, ret);
> +            return;
> +        }
>      }

I think the concept of skipping SG list entries makes this patch
relatively complex. Maybe one of these would work better:

1. Instead of skipping, add a temporary bounce buffer to the iovec.

2. Instead of skipping, just exit the loop and effectively split the
   request in multiple parts (like you already do in one case). Then the
   memory will still be written to twice, but deterministically so that
   the later SG list entry always wins.

I think 2. sounds quite attractive because you don't have to manage any
additional state. You can even simplify the loop to use ranges_overlap()
instead of two separate cases: No matter which way the ranges overlap,
you just execute the first part and continue with the overlapping part
in the next dma_blk_cb() callback.
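
As a rough, untested sketch of option 2 (sg_overlaps_queued_part and
part_start are made-up names; part_start would be the sg index at which
the current sub-request started):

    /*
     * Return true if the element (cur_addr, cur_len) overlaps any element
     * already queued for the sub-request being built, i.e. elements
     * part_start .. cur_index - 1.  On a hit the caller breaks out of the
     * mapping loop and submits dbs->iov as it is, so the overlapping
     * element always lands in a later sub-request and its write
     * deterministically comes last.
     */
    static bool sg_overlaps_queued_part(const QEMUSGList *sg, int part_start,
                                        int cur_index, dma_addr_t cur_addr,
                                        dma_addr_t cur_len)
    {
        int i;

        for (i = part_start; i < cur_index; i++) {
            if (ranges_overlap(sg->sg[i].base, sg->sg[i].len,
                               cur_addr, cur_len)) {
                return true;
            }
        }
        return false;
    }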

Kevin
Pavel Dovgalyuk March 3, 2020, 12:31 p.m. UTC | #2
Kevin Wolf wrote on 2020-03-02 19:19:
> Am 02.03.2020 um 13:59 hat Pavel Dovgalyuk geschrieben:
>> Windows guest sometimes makes DMA requests with overlapping
>> target addresses. This leads to the following structure of iov for
>> the block driver:
>> 
>> addr size1
>> addr size2
>> addr size3
>> 
>> It means that three adjacent disk blocks should be read into the same
>> memory buffer. Windows does not expects anything from these bytes
>> (should it be data from the first block, or the last one, or some 
>> mix),
>> but uses them somehow. It leads to non-determinism of the guest 
>> execution,
>> because block driver does not preserve any order of reading.
>> 
>> This situation was discusses in the mailing list at least twice:
>> https://lists.gnu.org/archive/html/qemu-devel/2010-09/msg01996.html
>> https://lists.gnu.org/archive/html/qemu-devel/2020-02/msg05185.html
>> 
>> This patch makes such disk reads deterministic in icount mode.
>> It skips SG parts that were already affected by prior reads
>> within the same request. Parts that are non identical, but are just
>> overlapped, are trimmed.
>> 
>> Examples for different SG part sequences:
>> 
>> 1)
>> A1 1000
>> A1 1000
>> ->
>> A1 1000
>> 
>> 2)
>> A1 1000
>> A2 1000
>> A1 1000
>> A3 1000
>> ->
>> Two requests with different offsets, because second A1/1000 should be 
>> skipped.
>> A1 1000
>> A2 1000
>> --
>> A3 1000
> 
> How is the "--" line represented in the code?
> 
>> 3)
>> A1 800
>> A2 1000
>> A1 1000
>> ->
>> First 800 bytes of third SG are skipped.
>> A1 800
>> A2 1000
>> --
>> A1+800 800
>> 
>> Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
>> ---
>>  dma-helpers.c |   57 
>> +++++++++++++++++++++++++++++++++++++++++++++++++++++----
>>  1 file changed, 53 insertions(+), 4 deletions(-)
>> 
>> diff --git a/dma-helpers.c b/dma-helpers.c
>> index e8a26e81e1..d71512f707 100644
>> --- a/dma-helpers.c
>> +++ b/dma-helpers.c
>> @@ -13,6 +13,7 @@
>>  #include "trace-root.h"
>>  #include "qemu/thread.h"
>>  #include "qemu/main-loop.h"
>> +#include "sysemu/cpus.h"
>> 
>>  /* #define DEBUG_IOMMU */
>> 
>> @@ -139,17 +140,65 @@ static void dma_blk_cb(void *opaque, int ret)
>>      dma_blk_unmap(dbs);
>> 
>>      while (dbs->sg_cur_index < dbs->sg->nsg) {
>> +        bool skip = false;
>>          cur_addr = dbs->sg->sg[dbs->sg_cur_index].base + 
>> dbs->sg_cur_byte;
>>          cur_len = dbs->sg->sg[dbs->sg_cur_index].len - 
>> dbs->sg_cur_byte;
>> -        mem = dma_memory_map(dbs->sg->as, cur_addr, &cur_len, 
>> dbs->dir);
>> -        if (!mem)
>> -            break;
>> -        qemu_iovec_add(&dbs->iov, mem, cur_len);
>> +
>> +        /*
>> +         * Make reads deterministic in icount mode.
>> +         * Windows sometimes issues disk read requests with
>> +         * overlapping SGs. It leads to non-determinism, because
>> +         * resulting buffer contents may be mixed from several
>> +         * sectors.
>> +         * This code crops SGs that were already read in this 
>> request.
>> +         */
> 
> Please make use of the full line length for the commit text, and add
> empty lines between paragraphs.

Ok

> 
>> +        if (use_icount
>> +            && dbs->dir == DMA_DIRECTION_FROM_DEVICE) {
> 
> This fits in a single line.

Ok

>> +        }
>> +
>>          dbs->sg_cur_byte += cur_len;
>>          if (dbs->sg_cur_byte == dbs->sg->sg[dbs->sg_cur_index].len) {
>>              dbs->sg_cur_byte = 0;
>>              ++dbs->sg_cur_index;
>>          }
>> +
>> +        /*
>> +         * All remaining SGs were skipped.
>> +         * This is not reschedule case, because we already
>> +         * performed the reads, and the last SGs were skipped.
>> +         */
>> +        if (dbs->sg_cur_index == dbs->sg->nsg && dbs->iov.size == 0) 
>> {
>> +            dma_complete(dbs, ret);
>> +            return;
>> +        }
>>      }
> 
> I think the concept of skipping SG list entries makes this patch
> relatively complex. Maybe one of these would work better:
> 
> 1. Instead of skipping, add a temporary bounce buffer to the iovec.
> 
> 2. Instead of skipping, just exit the loop and effectively split the
>    request in multiple parts (like you already do in one case). Then 
> the
>    memory will still be written to twice, but deterministically so that
>    the later SG list entry always wins.
> 
> I think 2. sounds quite attractive because you don't have to manage any
> additional state. You can even simplify the loop to use 
> ranges_overlap()

Thanks for this idea. Please check the new version.
I couldn't find a way to check the SG addresses without making the
comparisons too complex or storing extra data, so I pass the iov
pointers directly to ranges_overlap().
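
Roughly along these lines, placed after dma_memory_map() has produced
'mem' (illustrative only; the actual patch may differ in details):

    int i;
    bool overlap = false;

    for (i = 0; i < dbs->iov.niov; i++) {
        if (ranges_overlap((uintptr_t)dbs->iov.iov[i].iov_base,
                           dbs->iov.iov[i].iov_len,
                           (uintptr_t)mem, cur_len)) {
            overlap = true;
            break;
        }
    }
    if (overlap) {
        /* Unmap the freshly mapped element again and submit what has
         * been queued so far; the overlapping element then starts the
         * next sub-request, so its write deterministically comes last. */
        dma_memory_unmap(dbs->sg->as, mem, cur_len, dbs->dir, 0);
        break; /* leave the while loop; dbs->iov is submitted below */
    }

Comparing the mapped host pointers like this assumes the mapping is
direct, i.e. no bounce buffer is involved.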


Pavel Dovgalyuk

Patch

diff --git a/dma-helpers.c b/dma-helpers.c
index e8a26e81e1..d71512f707 100644
--- a/dma-helpers.c
+++ b/dma-helpers.c
@@ -13,6 +13,7 @@ 
 #include "trace-root.h"
 #include "qemu/thread.h"
 #include "qemu/main-loop.h"
+#include "sysemu/cpus.h"
 
 /* #define DEBUG_IOMMU */
 
@@ -139,17 +140,65 @@  static void dma_blk_cb(void *opaque, int ret)
     dma_blk_unmap(dbs);
 
     while (dbs->sg_cur_index < dbs->sg->nsg) {
+        bool skip = false;
         cur_addr = dbs->sg->sg[dbs->sg_cur_index].base + dbs->sg_cur_byte;
         cur_len = dbs->sg->sg[dbs->sg_cur_index].len - dbs->sg_cur_byte;
-        mem = dma_memory_map(dbs->sg->as, cur_addr, &cur_len, dbs->dir);
-        if (!mem)
-            break;
-        qemu_iovec_add(&dbs->iov, mem, cur_len);
+
+        /*
+         * Make reads deterministic in icount mode.
+         * Windows sometimes issues disk read requests with
+         * overlapping SGs. It leads to non-determinism, because
+         * resulting buffer contents may be mixed from several
+         * sectors.
+         * This code crops SGs that were already read in this request.
+         */
+        if (use_icount
+            && dbs->dir == DMA_DIRECTION_FROM_DEVICE) {
+            int i;
+            for (i = 0 ; i < dbs->sg_cur_index ; ++i) {
+                if (dbs->sg->sg[i].base <= cur_addr
+                    && dbs->sg->sg[i].base + dbs->sg->sg[i].len > cur_addr) {
+                    cur_len = MIN(cur_len,
+                        dbs->sg->sg[i].base + dbs->sg->sg[i].len - cur_addr);
+                    skip = true;
+                    break;
+                } else if (cur_addr <= dbs->sg->sg[i].base
+                    && cur_addr + cur_len > dbs->sg->sg[i].base) {
+                    cur_len = dbs->sg->sg[i].base - cur_addr;
+                    break;
+                }
+            }
+        }
+
+        assert(cur_len);
+        if (!skip) {
+            mem = dma_memory_map(dbs->sg->as, cur_addr, &cur_len, dbs->dir);
+            if (!mem)
+                break;
+            qemu_iovec_add(&dbs->iov, mem, cur_len);
+        } else {
+            if (dbs->iov.size != 0) {
+                break;
+            }
+            /* skip this SG */
+            dbs->offset += cur_len;
+        }
+
         dbs->sg_cur_byte += cur_len;
         if (dbs->sg_cur_byte == dbs->sg->sg[dbs->sg_cur_index].len) {
             dbs->sg_cur_byte = 0;
             ++dbs->sg_cur_index;
         }
+
+        /*
+         * All remaining SGs were skipped.
+         * This is not reschedule case, because we already
+         * performed the reads, and the last SGs were skipped.
+         */
+        if (dbs->sg_cur_index == dbs->sg->nsg && dbs->iov.size == 0) {
+            dma_complete(dbs, ret);
+            return;
+        }
     }
 
     if (dbs->iov.size == 0) {