
[v2] util/hbitmap: strict hbitmap_reset

Message ID 20190806152611.280389-1-vsementsov@virtuozzo.com
State New
Series [v2] util/hbitmap: strict hbitmap_reset

Commit Message

Vladimir Sementsov-Ogievskiy Aug. 6, 2019, 3:26 p.m. UTC
hbitmap_reset has an unobvious property: it rounds the requested region up.
This can provoke bugs, like the one recently fixed in mirror's
write-blocking mode: the user calls reset on an unaligned region, not
keeping in mind that the rounded-up region may cover unrelated dirty
bytes, whose "dirtiness" information is then lost.

Make hbitmap_reset strict: assert that the arguments are aligned, allowing
only one exception, when @start + @count == hb->orig_size. This is needed
to accommodate users of hbitmap_next_dirty_area, which cares about
hb->orig_size.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---

v2 is based on Max's https://github.com/XanClic/qemu.git block branch,
which will soon be merged for 4.1, while this patch targets 4.2
Based-on: https://github.com/XanClic/qemu.git block

v1 was "[PATCH] util/hbitmap: fix unaligned reset", and as I understand
we all agreed to just assert alignment instead of aligning down
automatically.

 include/qemu/hbitmap.h | 5 +++++
 tests/test-hbitmap.c   | 2 +-
 util/hbitmap.c         | 4 ++++
 3 files changed, 10 insertions(+), 1 deletion(-)
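
As a standalone illustration of the rule the new asserts encode (this sketch
is not part of the patch; the helper name and the sample numbers are invented
for illustration), @start must always be granularity-aligned, while an
unaligned @count is tolerated only when the range ends exactly at
hb->orig_size:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Mirrors the check added to hbitmap_reset(): 'granularity' is the log2
 * chunk size, 'orig_size' the length the bitmap was created with. */
static bool reset_args_valid(uint64_t start, uint64_t count,
                             unsigned granularity, uint64_t orig_size)
{
    uint64_t gran = 1ULL << granularity;

    if (start & (gran - 1)) {
        return false;              /* an unaligned @start is never allowed */
    }
    /* an unaligned @count is allowed only for the tail of the bitmap */
    return !(count & (gran - 1)) || start + count == orig_size;
}

int main(void)
{
    /* granularity = 1 (chunks of two units), bitmap of 7 units */
    printf("%d\n", reset_args_valid(0, 2, 1, 7)); /* 1: fully aligned */
    printf("%d\n", reset_args_valid(0, 1, 1, 7)); /* 0: used to be rounded up */
    printf("%d\n", reset_args_valid(6, 1, 1, 7)); /* 1: unaligned tail */
    return 0;
}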

Comments

Max Reitz Aug. 6, 2019, 4:09 p.m. UTC | #1
On 06.08.19 17:26, Vladimir Sementsov-Ogievskiy wrote:
> hbitmap_reset has an unobvious property: it rounds requested region up.
> It may provoke bugs, like in recently fixed write-blocking mode of
> mirror: user calls reset on unaligned region, not keeping in mind that
> there are possible unrelated dirty bytes, covered by rounded-up region
> and information of this unrelated "dirtiness" will be lost.
> 
> Make hbitmap_reset strict: assert that arguments are aligned, allowing
> only one exception when @start + @count == hb->orig_size. It's needed
> to comfort users of hbitmap_next_dirty_area, which cares about
> hb->orig_size.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
> 
> v2 based on Max's https://github.com/XanClic/qemu.git block
> which will be merged soon to 4.1, and this patch goes to 4.2
> Based-on: https://github.com/XanClic/qemu.git block
> 
> v1 was "[PATCH] util/hbitmap: fix unaligned reset", and as I understand
> we all agreed to just assert alignment instead of aligning down
> automatically.
> 
>  include/qemu/hbitmap.h | 5 +++++
>  tests/test-hbitmap.c   | 2 +-
>  util/hbitmap.c         | 4 ++++
>  3 files changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/include/qemu/hbitmap.h b/include/qemu/hbitmap.h
> index 4afbe6292e..7865e819ca 100644
> --- a/include/qemu/hbitmap.h
> +++ b/include/qemu/hbitmap.h
> @@ -132,6 +132,11 @@ void hbitmap_set(HBitmap *hb, uint64_t start, uint64_t count);
>   * @count: Number of bits to reset.
>   *
>   * Reset a consecutive range of bits in an HBitmap.
> + * @start and @count must be aligned to bitmap granularity. The only exception
> + * is resetting the tail of the bitmap: @count may be equal to @start +
> + * hb->orig_size,

s/@start + hb->orig_size/hb->orig_size - @start/, I think.

>     in this case @count may be not aligned. @start + @count

+are

With those fixed:

Reviewed-by: Max Reitz <mreitz@redhat.com>

> + * allowed to be greater than hb->orig_size, but only if @start < hb->orig_size
> + * and @start + @count = ALIGN_UP(hb->orig_size, granularity).
>   */
>  void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count);
>  
> diff --git a/tests/test-hbitmap.c b/tests/test-hbitmap.c
> index 592d8219db..2be56d1597 100644
> --- a/tests/test-hbitmap.c
> +++ b/tests/test-hbitmap.c
> @@ -423,7 +423,7 @@ static void test_hbitmap_granularity(TestHBitmapData *data,
>      hbitmap_test_check(data, 0);
>      hbitmap_test_set(data, 0, 3);
>      g_assert_cmpint(hbitmap_count(data->hb), ==, 4);
> -    hbitmap_test_reset(data, 0, 1);
> +    hbitmap_test_reset(data, 0, 2);
>      g_assert_cmpint(hbitmap_count(data->hb), ==, 2);
>  }
>  
> diff --git a/util/hbitmap.c b/util/hbitmap.c
> index bcc0acdc6a..586920cb52 100644
> --- a/util/hbitmap.c
> +++ b/util/hbitmap.c
> @@ -476,6 +476,10 @@ void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count)
>      /* Compute range in the last layer.  */
>      uint64_t first;
>      uint64_t last = start + count - 1;
> +    uint64_t gran = 1ULL << hb->granularity;
> +
> +    assert(!(start & (gran - 1)));
> +    assert(!(count & (gran - 1)) || (start + count == hb->orig_size));
>  
>      trace_hbitmap_reset(hb, start, count,
>                          start >> hb->granularity, last >> hb->granularity);
>
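For reference, with both fixes above folded in (plus the further wording
tweaks discussed later in the thread), the amended documentation comment
reads roughly as follows; this is an approximation, not a quote of the final
commit:

/*
 * Reset a consecutive range of bits in an HBitmap.
 * @start and @count must be aligned to the bitmap granularity.  The only
 * exception is resetting the tail of the bitmap: @count may be equal to
 * hb->orig_size - @start, in which case @count may be unaligned.  The sum
 * of @start and @count is allowed to be greater than hb->orig_size, but
 * only if @start < hb->orig_size and
 * @start + @count = ALIGN_UP(hb->orig_size, granularity).
 */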
Vladimir Sementsov-Ogievskiy Aug. 6, 2019, 4:19 p.m. UTC | #2
06.08.2019 19:09, Max Reitz wrote:
> On 06.08.19 17:26, Vladimir Sementsov-Ogievskiy wrote:
>> hbitmap_reset has an unobvious property: it rounds requested region up.
>> It may provoke bugs, like in recently fixed write-blocking mode of
>> mirror: user calls reset on unaligned region, not keeping in mind that
>> there are possible unrelated dirty bytes, covered by rounded-up region
>> and information of this unrelated "dirtiness" will be lost.
>>
>> Make hbitmap_reset strict: assert that arguments are aligned, allowing
>> only one exception when @start + @count == hb->orig_size. It's needed
>> to comfort users of hbitmap_next_dirty_area, which cares about
>> hb->orig_size.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>
>> v2 based on Max's https://github.com/XanClic/qemu.git block
>> which will be merged soon to 4.1, and this patch goes to 4.2
>> Based-on: https://github.com/XanClic/qemu.git block
>>
>> v1 was "[PATCH] util/hbitmap: fix unaligned reset", and as I understand
>> we all agreed to just assert alignment instead of aligning down
>> automatically.
>>
>>   include/qemu/hbitmap.h | 5 +++++
>>   tests/test-hbitmap.c   | 2 +-
>>   util/hbitmap.c         | 4 ++++
>>   3 files changed, 10 insertions(+), 1 deletion(-)
>>
>> diff --git a/include/qemu/hbitmap.h b/include/qemu/hbitmap.h
>> index 4afbe6292e..7865e819ca 100644
>> --- a/include/qemu/hbitmap.h
>> +++ b/include/qemu/hbitmap.h
>> @@ -132,6 +132,11 @@ void hbitmap_set(HBitmap *hb, uint64_t start, uint64_t count);
>>    * @count: Number of bits to reset.
>>    *
>>    * Reset a consecutive range of bits in an HBitmap.
>> + * @start and @count must be aligned to bitmap granularity. The only exception
>> + * is resetting the tail of the bitmap: @count may be equal to @start +
>> + * hb->orig_size,
> 
> s/@start + hb->orig_size/hb->orig_size - @start/, I think.

Ha, I meant to say that start + count is equal to orig_size. Yours is OK too, of course.

> 
>>      in this case @count may be not aligned. @start + @count
> 
> +are
> 
> With those fixed:
> 
> Reviewed-by: Max Reitz <mreitz@redhat.com>

Thanks!

> 
>> + * allowed to be greater than hb->orig_size, but only if @start < hb->orig_size
>> + * and @start + @count = ALIGN_UP(hb->orig_size, granularity).
>>    */
>>   void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count);
>>   
>> diff --git a/tests/test-hbitmap.c b/tests/test-hbitmap.c
>> index 592d8219db..2be56d1597 100644
>> --- a/tests/test-hbitmap.c
>> +++ b/tests/test-hbitmap.c
>> @@ -423,7 +423,7 @@ static void test_hbitmap_granularity(TestHBitmapData *data,
>>       hbitmap_test_check(data, 0);
>>       hbitmap_test_set(data, 0, 3);
>>       g_assert_cmpint(hbitmap_count(data->hb), ==, 4);
>> -    hbitmap_test_reset(data, 0, 1);
>> +    hbitmap_test_reset(data, 0, 2);
>>       g_assert_cmpint(hbitmap_count(data->hb), ==, 2);
>>   }
>>   
>> diff --git a/util/hbitmap.c b/util/hbitmap.c
>> index bcc0acdc6a..586920cb52 100644
>> --- a/util/hbitmap.c
>> +++ b/util/hbitmap.c
>> @@ -476,6 +476,10 @@ void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count)
>>       /* Compute range in the last layer.  */
>>       uint64_t first;
>>       uint64_t last = start + count - 1;
>> +    uint64_t gran = 1ULL << hb->granularity;
>> +
>> +    assert(!(start & (gran - 1)));
>> +    assert(!(count & (gran - 1)) || (start + count == hb->orig_size));
>>   
>>       trace_hbitmap_reset(hb, start, count,
>>                           start >> hb->granularity, last >> hb->granularity);
>>
> 
>
John Snow Aug. 7, 2019, 4:27 p.m. UTC | #3
On 8/6/19 12:19 PM, Vladimir Sementsov-Ogievskiy wrote:
> 06.08.2019 19:09, Max Reitz wrote:
>> On 06.08.19 17:26, Vladimir Sementsov-Ogievskiy wrote:
>>> hbitmap_reset has an unobvious property: it rounds requested region up.
>>> It may provoke bugs, like in recently fixed write-blocking mode of
>>> mirror: user calls reset on unaligned region, not keeping in mind that
>>> there are possible unrelated dirty bytes, covered by rounded-up region
>>> and information of this unrelated "dirtiness" will be lost.
>>>
>>> Make hbitmap_reset strict: assert that arguments are aligned, allowing
>>> only one exception when @start + @count == hb->orig_size. It's needed
>>> to comfort users of hbitmap_next_dirty_area, which cares about
>>> hb->orig_size.
>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>> ---
>>>
>>> v2 based on Max's https://github.com/XanClic/qemu.git block
>>> which will be merged soon to 4.1, and this patch goes to 4.2
>>> Based-on: https://github.com/XanClic/qemu.git block
>>>
>>> v1 was "[PATCH] util/hbitmap: fix unaligned reset", and as I understand
>>> we all agreed to just assert alignment instead of aligning down
>>> automatically.
>>>
>>>   include/qemu/hbitmap.h | 5 +++++
>>>   tests/test-hbitmap.c   | 2 +-
>>>   util/hbitmap.c         | 4 ++++
>>>   3 files changed, 10 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/include/qemu/hbitmap.h b/include/qemu/hbitmap.h
>>> index 4afbe6292e..7865e819ca 100644
>>> --- a/include/qemu/hbitmap.h
>>> +++ b/include/qemu/hbitmap.h
>>> @@ -132,6 +132,11 @@ void hbitmap_set(HBitmap *hb, uint64_t start, uint64_t count);
>>>    * @count: Number of bits to reset.
>>>    *
>>>    * Reset a consecutive range of bits in an HBitmap.
>>> + * @start and @count must be aligned to bitmap granularity. The only exception
>>> + * is resetting the tail of the bitmap: @count may be equal to @start +
>>> + * hb->orig_size,
>>
>> s/@start + hb->orig_size/hb->orig_size - @start/, I think.
> 
> Ha, I wanted to say start + count equal to orig_size. Yours is OK too of course.
> 
>>
>>>      in this case @count may be not aligned. @start + @count
>>
>> +are
>>
>> With those fixed:
>>
>> Reviewed-by: Max Reitz <mreitz@redhat.com>
> 
> Thanks!
> 

I'll add this to the pile for 4.2, after I fix the rebase conflicts that
arose from 4.1-rc4.

--js
Vladimir Sementsov-Ogievskiy Sept. 11, 2019, 3:13 p.m. UTC | #4
07.08.2019 19:27, John Snow wrote:
> 
> 
> On 8/6/19 12:19 PM, Vladimir Sementsov-Ogievskiy wrote:
>> 06.08.2019 19:09, Max Reitz wrote:
>>> On 06.08.19 17:26, Vladimir Sementsov-Ogievskiy wrote:
>>>> hbitmap_reset has an unobvious property: it rounds requested region up.
>>>> It may provoke bugs, like in recently fixed write-blocking mode of
>>>> mirror: user calls reset on unaligned region, not keeping in mind that
>>>> there are possible unrelated dirty bytes, covered by rounded-up region
>>>> and information of this unrelated "dirtiness" will be lost.
>>>>
>>>> Make hbitmap_reset strict: assert that arguments are aligned, allowing
>>>> only one exception when @start + @count == hb->orig_size. It's needed
>>>> to comfort users of hbitmap_next_dirty_area, which cares about
>>>> hb->orig_size.
>>>>
>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>>> ---
>>>>
>>>> v2 based on Max's https://github.com/XanClic/qemu.git block
>>>> which will be merged soon to 4.1, and this patch goes to 4.2
>>>> Based-on: https://github.com/XanClic/qemu.git block
>>>>
>>>> v1 was "[PATCH] util/hbitmap: fix unaligned reset", and as I understand
>>>> we all agreed to just assert alignment instead of aligning down
>>>> automatically.
>>>>
>>>>    include/qemu/hbitmap.h | 5 +++++
>>>>    tests/test-hbitmap.c   | 2 +-
>>>>    util/hbitmap.c         | 4 ++++
>>>>    3 files changed, 10 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/include/qemu/hbitmap.h b/include/qemu/hbitmap.h
>>>> index 4afbe6292e..7865e819ca 100644
>>>> --- a/include/qemu/hbitmap.h
>>>> +++ b/include/qemu/hbitmap.h
>>>> @@ -132,6 +132,11 @@ void hbitmap_set(HBitmap *hb, uint64_t start, uint64_t count);
>>>>     * @count: Number of bits to reset.
>>>>     *
>>>>     * Reset a consecutive range of bits in an HBitmap.
>>>> + * @start and @count must be aligned to bitmap granularity. The only exception
>>>> + * is resetting the tail of the bitmap: @count may be equal to @start +
>>>> + * hb->orig_size,
>>>
>>> s/@start + hb->orig_size/hb->orig_size - @start/, I think.
>>
>> Ha, I wanted to say start + count equal to orig_size. Yours is OK too of course.
>>
>>>
>>>>       in this case @count may be not aligned. @start + @count
>>>
>>> +are
>>>
>>> With those fixed:
>>>
>>> Reviewed-by: Max Reitz <mreitz@redhat.com>
>>
>> Thanks!
>>
> 
> I'll add this to the pile for 4.2, after I fix the rebase conflicts that
> arose from 4.1-rc4.
> 

Hi!

Did you forget, or should I resend?
John Snow Sept. 11, 2019, 5:59 p.m. UTC | #5
On 9/11/19 11:13 AM, Vladimir Sementsov-Ogievskiy wrote:
> 07.08.2019 19:27, John Snow wrote:
>>
>>
>> On 8/6/19 12:19 PM, Vladimir Sementsov-Ogievskiy wrote:
>>> 06.08.2019 19:09, Max Reitz wrote:
>>>> On 06.08.19 17:26, Vladimir Sementsov-Ogievskiy wrote:
>>>>> hbitmap_reset has an unobvious property: it rounds requested region up.
>>>>> It may provoke bugs, like in recently fixed write-blocking mode of
>>>>> mirror: user calls reset on unaligned region, not keeping in mind that
>>>>> there are possible unrelated dirty bytes, covered by rounded-up region
>>>>> and information of this unrelated "dirtiness" will be lost.
>>>>>
>>>>> Make hbitmap_reset strict: assert that arguments are aligned, allowing
>>>>> only one exception when @start + @count == hb->orig_size. It's needed
>>>>> to comfort users of hbitmap_next_dirty_area, which cares about
>>>>> hb->orig_size.
>>>>>
>>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>>>> ---
>>>>>
>>>>> v2 based on Max's https://github.com/XanClic/qemu.git block
>>>>> which will be merged soon to 4.1, and this patch goes to 4.2
>>>>> Based-on: https://github.com/XanClic/qemu.git block
>>>>>
>>>>> v1 was "[PATCH] util/hbitmap: fix unaligned reset", and as I understand
>>>>> we all agreed to just assert alignment instead of aligning down
>>>>> automatically.
>>>>>
>>>>>    include/qemu/hbitmap.h | 5 +++++
>>>>>    tests/test-hbitmap.c   | 2 +-
>>>>>    util/hbitmap.c         | 4 ++++
>>>>>    3 files changed, 10 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/include/qemu/hbitmap.h b/include/qemu/hbitmap.h
>>>>> index 4afbe6292e..7865e819ca 100644
>>>>> --- a/include/qemu/hbitmap.h
>>>>> +++ b/include/qemu/hbitmap.h
>>>>> @@ -132,6 +132,11 @@ void hbitmap_set(HBitmap *hb, uint64_t start, uint64_t count);
>>>>>     * @count: Number of bits to reset.
>>>>>     *
>>>>>     * Reset a consecutive range of bits in an HBitmap.
>>>>> + * @start and @count must be aligned to bitmap granularity. The only exception
>>>>> + * is resetting the tail of the bitmap: @count may be equal to @start +
>>>>> + * hb->orig_size,
>>>>
>>>> s/@start + hb->orig_size/hb->orig_size - @start/, I think.
>>>
>>> Ha, I wanted to say start + count equal to orig_size. Yours is OK too of course.
>>>
>>>>
>>>>>       in this case @count may be not aligned. @start + @count
>>>>
>>>> +are
>>>>
>>>> With those fixed:
>>>>
>>>> Reviewed-by: Max Reitz <mreitz@redhat.com>
>>>
>>> Thanks!
>>>
>>
>> I'll add this to the pile for 4.2, after I fix the rebase conflicts that
>> arose from 4.1-rc4.
>>
> 
> Hi!
> 
> Didn't you forget, or should I resend?
> 
> 

I must have dropped the patch by accident during the rebase. As an
apology, I squashed in Max's suggestions from the list. Could you check
that they look OK?

Thanks, applied to my bitmaps tree:

https://github.com/jnsnow/qemu/commits/bitmaps
https://github.com/jnsnow/qemu.git

--js
Vladimir Sementsov-Ogievskiy Sept. 12, 2019, 8:20 a.m. UTC | #6
11.09.2019 20:59, John Snow wrote:
> 
> 
> On 9/11/19 11:13 AM, Vladimir Sementsov-Ogievskiy wrote:
>> 07.08.2019 19:27, John Snow wrote:
>>>
>>>
>>> On 8/6/19 12:19 PM, Vladimir Sementsov-Ogievskiy wrote:
>>>> 06.08.2019 19:09, Max Reitz wrote:
>>>>> On 06.08.19 17:26, Vladimir Sementsov-Ogievskiy wrote:
>>>>>> hbitmap_reset has an unobvious property: it rounds requested region up.
>>>>>> It may provoke bugs, like in recently fixed write-blocking mode of
>>>>>> mirror: user calls reset on unaligned region, not keeping in mind that
>>>>>> there are possible unrelated dirty bytes, covered by rounded-up region
>>>>>> and information of this unrelated "dirtiness" will be lost.
>>>>>>
>>>>>> Make hbitmap_reset strict: assert that arguments are aligned, allowing
>>>>>> only one exception when @start + @count == hb->orig_size. It's needed
>>>>>> to comfort users of hbitmap_next_dirty_area, which cares about
>>>>>> hb->orig_size.
>>>>>>
>>>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>>>>> ---
>>>>>>
>>>>>> v2 based on Max's https://github.com/XanClic/qemu.git block
>>>>>> which will be merged soon to 4.1, and this patch goes to 4.2
>>>>>> Based-on: https://github.com/XanClic/qemu.git block
>>>>>>
>>>>>> v1 was "[PATCH] util/hbitmap: fix unaligned reset", and as I understand
>>>>>> we all agreed to just assert alignment instead of aligning down
>>>>>> automatically.
>>>>>>
>>>>>>     include/qemu/hbitmap.h | 5 +++++
>>>>>>     tests/test-hbitmap.c   | 2 +-
>>>>>>     util/hbitmap.c         | 4 ++++
>>>>>>     3 files changed, 10 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/include/qemu/hbitmap.h b/include/qemu/hbitmap.h
>>>>>> index 4afbe6292e..7865e819ca 100644
>>>>>> --- a/include/qemu/hbitmap.h
>>>>>> +++ b/include/qemu/hbitmap.h
>>>>>> @@ -132,6 +132,11 @@ void hbitmap_set(HBitmap *hb, uint64_t start, uint64_t count);
>>>>>>      * @count: Number of bits to reset.
>>>>>>      *
>>>>>>      * Reset a consecutive range of bits in an HBitmap.
>>>>>> + * @start and @count must be aligned to bitmap granularity. The only exception
>>>>>> + * is resetting the tail of the bitmap: @count may be equal to @start +
>>>>>> + * hb->orig_size,
>>>>>
>>>>> s/@start + hb->orig_size/hb->orig_size - @start/, I think.
>>>>
>>>> Ha, I wanted to say start + count equal to orig_size. Yours is OK too of course.
>>>>
>>>>>
>>>>>>        in this case @count may be not aligned. @start + @count
>>>>>
>>>>> +are
>>>>>
>>>>> With those fixed:
>>>>>
>>>>> Reviewed-by: Max Reitz <mreitz@redhat.com>
>>>>
>>>> Thanks!
>>>>
>>>
>>> I'll add this to the pile for 4.2, after I fix the rebase conflicts that
>>> arose from 4.1-rc4.
>>>
>>
>> Hi!
>>
>> Didn't you forget, or should I resend?
>>
>>
> 
> I must have dropped the patch by accident during the rebasing. As an
> apology, I squashed in Max's suggestions from the list. Check that they
> look OK, please?
> 
> Thanks, applied to my bitmaps tree:
> 
> https://github.com/jnsnow/qemu/commits/bitmaps
> https://github.com/jnsnow/qemu.git
> 

Thanks! Still:

Quote from your branch:

 >   * Reset a consecutive range of bits in an HBitmap.
 > + * @start and @count must be aligned to bitmap granularity. The only exception
 > + * is resetting the tail of the bitmap: @count may be equal to hb->orig_size -
 > + * start, in this case @count may be not aligned. @start + @count are

s/start/@start/ (corresponds to Max's comment, too)

Also, I'm not sure about the "are" suggested by Max: "are" is plural, but here I meant
one object, the sum of @start and @count.

So you could use "The sum of @start and @count is", "The (@start + @count) sum is", or
just "(@start + @count) is", whichever you like best.

 > + * allowed to be greater than hb->orig_size, but only if @start < hb->orig_size
 > + * and @start + @count = ALIGN_UP(hb->orig_size, granularity).
 >   */
 >  void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count);
John Snow Sept. 13, 2019, 6:49 p.m. UTC | #7
On 9/12/19 4:20 AM, Vladimir Sementsov-Ogievskiy wrote:
> 11.09.2019 20:59, John Snow wrote:
>>
>>
>> On 9/11/19 11:13 AM, Vladimir Sementsov-Ogievskiy wrote:
>>> 07.08.2019 19:27, John Snow wrote:
>>>>
>>>>
>>>> On 8/6/19 12:19 PM, Vladimir Sementsov-Ogievskiy wrote:
>>>>> 06.08.2019 19:09, Max Reitz wrote:
>>>>>> On 06.08.19 17:26, Vladimir Sementsov-Ogievskiy wrote:
>>>>>>> hbitmap_reset has an unobvious property: it rounds requested region up.
>>>>>>> It may provoke bugs, like in recently fixed write-blocking mode of
>>>>>>> mirror: user calls reset on unaligned region, not keeping in mind that
>>>>>>> there are possible unrelated dirty bytes, covered by rounded-up region
>>>>>>> and information of this unrelated "dirtiness" will be lost.
>>>>>>>
>>>>>>> Make hbitmap_reset strict: assert that arguments are aligned, allowing
>>>>>>> only one exception when @start + @count == hb->orig_size. It's needed
>>>>>>> to comfort users of hbitmap_next_dirty_area, which cares about
>>>>>>> hb->orig_size.
>>>>>>>
>>>>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>>>>>> ---
>>>>>>>
>>>>>>> v2 based on Max's https://github.com/XanClic/qemu.git block
>>>>>>> which will be merged soon to 4.1, and this patch goes to 4.2
>>>>>>> Based-on: https://github.com/XanClic/qemu.git block
>>>>>>>
>>>>>>> v1 was "[PATCH] util/hbitmap: fix unaligned reset", and as I understand
>>>>>>> we all agreed to just assert alignment instead of aligning down
>>>>>>> automatically.
>>>>>>>
>>>>>>>     include/qemu/hbitmap.h | 5 +++++
>>>>>>>     tests/test-hbitmap.c   | 2 +-
>>>>>>>     util/hbitmap.c         | 4 ++++
>>>>>>>     3 files changed, 10 insertions(+), 1 deletion(-)
>>>>>>>
>>>>>>> diff --git a/include/qemu/hbitmap.h b/include/qemu/hbitmap.h
>>>>>>> index 4afbe6292e..7865e819ca 100644
>>>>>>> --- a/include/qemu/hbitmap.h
>>>>>>> +++ b/include/qemu/hbitmap.h
>>>>>>> @@ -132,6 +132,11 @@ void hbitmap_set(HBitmap *hb, uint64_t start, uint64_t count);
>>>>>>>      * @count: Number of bits to reset.
>>>>>>>      *
>>>>>>>      * Reset a consecutive range of bits in an HBitmap.
>>>>>>> + * @start and @count must be aligned to bitmap granularity. The only exception
>>>>>>> + * is resetting the tail of the bitmap: @count may be equal to @start +
>>>>>>> + * hb->orig_size,
>>>>>>
>>>>>> s/@start + hb->orig_size/hb->orig_size - @start/, I think.
>>>>>
>>>>> Ha, I wanted to say start + count equal to orig_size. Yours is OK too of course.
>>>>>
>>>>>>
>>>>>>>        in this case @count may be not aligned. @start + @count
>>>>>>
>>>>>> +are
>>>>>>
>>>>>> With those fixed:
>>>>>>
>>>>>> Reviewed-by: Max Reitz <mreitz@redhat.com>
>>>>>
>>>>> Thanks!
>>>>>
>>>>
>>>> I'll add this to the pile for 4.2, after I fix the rebase conflicts that
>>>> arose from 4.1-rc4.
>>>>
>>>
>>> Hi!
>>>
>>> Didn't you forget, or should I resend?
>>>
>>>
>>
>> I must have dropped the patch by accident during the rebasing. As an
>> apology, I squashed in Max's suggestions from the list. Check that they
>> look OK, please?
>>
>> Thanks, applied to my bitmaps tree:
>>
>> https://github.com/jnsnow/qemu/commits/bitmaps
>> https://github.com/jnsnow/qemu.git
>>
> 
> Thanks! Still:
> 
> Quote from your branch:
> 
>  >   * Reset a consecutive range of bits in an HBitmap.
>  > + * @start and @count must be aligned to bitmap granularity. The only exception
>  > + * is resetting the tail of the bitmap: @count may be equal to hb->orig_size -
>  > + * start, in this case @count may be not aligned. @start + @count are
> 
> s/start/@start/ (corresponds to Max's comment, too)
> 

OK, you got it.

> Also, I'm not sure about "are" suggested by Max. "are" is for plural, but here I meant
> one object: sum of @start and @count.
> 

There's no universal agreement about how to treat things like
collective nouns. Sometimes "data" is singular, and sometimes it's
plural. "It depends."

In this case, "start + count" refers to one sum, but two constituent
pieces, so it's functioning like a collective noun.

We might say "a + b (together) /are/ ..." but also "the sum of a + b /is/".

> So, you may use exactly "Sum of @start and @count is" or "(@start + @count) sum is" or
> just "(@start + @count) is", whichever you like more.
> 

I like using "the sum of @x and @y is", since it's grammatically unambiguous.

Updated and pushed.

(Sorry about my language again! --js)
Kevin Wolf Sept. 16, 2019, 8 a.m. UTC | #8
Am 13.09.2019 um 20:49 hat John Snow geschrieben:
> On 9/12/19 4:20 AM, Vladimir Sementsov-Ogievskiy wrote:
> > Also, I'm not sure about "are" suggested by Max. "are" is for plural, but here I meant
> > one object: sum of @start and @count.
> > 
> 
> There's not great agreement universally about how to treat things like
> collective nouns. Sometimes "Data" is singular, but sometimes it's
> plural. "It depends."
> 
> In this case, "start + count" refers to one sum, but two constituent
> pieces, so it's functioning like a collective noun.
> 
> We might say "a + b (together) /are/ ..." but also "the sum of a + b /is/".
> 
> > So, you may use exactly "Sum of @start and @count is" or "(@start + @count) sum is" or
> > just "(@start + @count) is", whichever you like more.
> > 
> 
> I like using "the sum of @x and @y is" for being grammatically unambiguous.
> 
> updated and pushed.
> 
> (Sorry about my language again! --js)

Time to revive https://patchwork.kernel.org/patch/8725621/? ;-)

Kevin
John Snow Sept. 16, 2019, 4:38 p.m. UTC | #9
On 9/16/19 4:00 AM, Kevin Wolf wrote:
> Am 13.09.2019 um 20:49 hat John Snow geschrieben:
>> On 9/12/19 4:20 AM, Vladimir Sementsov-Ogievskiy wrote:
>>> Also, I'm not sure about "are" suggested by Max. "are" is for plural, but here I meant
>>> one object: sum of @start and @count.
>>>
>>
>> There's not great agreement universally about how to treat things like
>> collective nouns. Sometimes "Data" is singular, but sometimes it's
>> plural. "It depends."
>>
>> In this case, "start + count" refers to one sum, but two constituent
>> pieces, so it's functioning like a collective noun.
>>
>> We might say "a + b (together) /are/ ..." but also "the sum of a + b /is/".
>>
>>> So, you may use exactly "Sum of @start and @count is" or "(@start + @count) sum is" or
>>> just "(@start + @count) is", whichever you like more.
>>>
>>
>> I like using "the sum of @x and @y is" for being grammatically unambiguous.
>>
>> updated and pushed.
>>
>> (Sorry about my language again! --js)
> 
> Time to revive https://patchwork.kernel.org/patch/8725621/? ;-)
> 
> Kevin
> 

Ja, bitte.

Patch

diff --git a/include/qemu/hbitmap.h b/include/qemu/hbitmap.h
index 4afbe6292e..7865e819ca 100644
--- a/include/qemu/hbitmap.h
+++ b/include/qemu/hbitmap.h
@@ -132,6 +132,11 @@  void hbitmap_set(HBitmap *hb, uint64_t start, uint64_t count);
  * @count: Number of bits to reset.
  *
  * Reset a consecutive range of bits in an HBitmap.
+ * @start and @count must be aligned to bitmap granularity. The only exception
+ * is resetting the tail of the bitmap: @count may be equal to @start +
+ * hb->orig_size, in this case @count may be not aligned. @start + @count
+ * allowed to be greater than hb->orig_size, but only if @start < hb->orig_size
+ * and @start + @count = ALIGN_UP(hb->orig_size, granularity).
  */
 void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count);
 
diff --git a/tests/test-hbitmap.c b/tests/test-hbitmap.c
index 592d8219db..2be56d1597 100644
--- a/tests/test-hbitmap.c
+++ b/tests/test-hbitmap.c
@@ -423,7 +423,7 @@  static void test_hbitmap_granularity(TestHBitmapData *data,
     hbitmap_test_check(data, 0);
     hbitmap_test_set(data, 0, 3);
     g_assert_cmpint(hbitmap_count(data->hb), ==, 4);
-    hbitmap_test_reset(data, 0, 1);
+    hbitmap_test_reset(data, 0, 2);
     g_assert_cmpint(hbitmap_count(data->hb), ==, 2);
 }
 
diff --git a/util/hbitmap.c b/util/hbitmap.c
index bcc0acdc6a..586920cb52 100644
--- a/util/hbitmap.c
+++ b/util/hbitmap.c
@@ -476,6 +476,10 @@  void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count)
     /* Compute range in the last layer.  */
     uint64_t first;
     uint64_t last = start + count - 1;
+    uint64_t gran = 1ULL << hb->granularity;
+
+    assert(!(start & (gran - 1)));
+    assert(!(count & (gran - 1)) || (start + count == hb->orig_size));
 
     trace_hbitmap_reset(hb, start, count,
                         start >> hb->granularity, last >> hb->granularity);
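
For callers, the practical consequence of the stricter contract is that any
alignment policy has to be made explicit at the call site.  Below is a hedged
sketch of one safe policy (the function name and the explicit 'gran'
parameter are invented here; QEMU_ALIGN_UP()/QEMU_ALIGN_DOWN() are the usual
QEMU macros): narrowing the range to the chunks it fully covers can never
clear unrelated dirty bits, which is exactly the failure mode the commit
message describes.

#include "qemu/osdep.h"
#include "qemu/hbitmap.h"

/* Reset only the granularity-sized chunks that lie entirely inside
 * [offset, offset + bytes); 'gran' is the bitmap granularity in the same
 * units as offset/bytes and is assumed to be known to the caller. */
static void hbitmap_reset_covered(HBitmap *hb, uint64_t gran,
                                  uint64_t offset, uint64_t bytes)
{
    uint64_t start = QEMU_ALIGN_UP(offset, gran);         /* round start up */
    uint64_t end = QEMU_ALIGN_DOWN(offset + bytes, gran); /* round end down */

    if (start < end) {
        /* Both start and end - start are multiples of gran, so the new
         * asserts in hbitmap_reset() are satisfied. */
        hbitmap_reset(hb, start, end - start);
    }
}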