
[05/27] block/parallels: add get_block_status

Message ID 1426069701-1405-6-git-send-email-den@openvz.org
State New

Commit Message

Denis V. Lunev March 11, 2015, 10:27 a.m. UTC
From: Roman Kagan <rkagan@parallels.com>

Implement the get_block_status method for the Parallels format driver.

Signed-off-by: Roman Kagan <rkagan@parallels.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Kevin Wolf <kwolf@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/parallels.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

Comments

Stefan Hajnoczi April 22, 2015, 12:39 p.m. UTC | #1
On Wed, Mar 11, 2015 at 01:27:59PM +0300, Denis V. Lunev wrote:
> +static int64_t coroutine_fn parallels_co_get_block_status(BlockDriverState *bs,
> +        int64_t sector_num, int nb_sectors, int *pnum)
> +{
> +    BDRVParallelsState *s = bs->opaque;
> +    int64_t offset;
> +
> +    qemu_co_mutex_lock(&s->lock);
> +    offset = seek_to_sector(s, sector_num);
> +    qemu_co_mutex_unlock(&s->lock);

The lock isn't necessary here yet.  It may become necessary when write
support is added, but probably not even then since seek_to_sector()
cannot yield (it's not a coroutine function), so there are no possible
races with other coroutines.

The same also applies to parallels_co_read().  The lock there isn't
necessary since the block driver is read-only and has no mutable state -
there is no shared state that needs lock protection.
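
For illustration, with the lock dropped the function would reduce to
something like this (untested sketch, same helpers and names as in
your patch):

static int64_t coroutine_fn parallels_co_get_block_status(BlockDriverState *bs,
        int64_t sector_num, int nb_sectors, int *pnum)
{
    BDRVParallelsState *s = bs->opaque;
    /* seek_to_sector() only does an in-memory lookup and cannot
     * yield, so no coroutine mutex is needed while the driver is
     * read-only. */
    int64_t offset = seek_to_sector(s, sector_num);

    *pnum = cluster_remainder(s, sector_num, nb_sectors);

    if (offset < 0) {
        return 0;
    }

    return (offset << BDRV_SECTOR_BITS) |
        BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID;
}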
Denis V. Lunev April 22, 2015, 12:42 p.m. UTC | #2
On 22/04/15 15:39, Stefan Hajnoczi wrote:
> On Wed, Mar 11, 2015 at 01:27:59PM +0300, Denis V. Lunev wrote:
>> +static int64_t coroutine_fn parallels_co_get_block_status(BlockDriverState *bs,
>> +        int64_t sector_num, int nb_sectors, int *pnum)
>> +{
>> +    BDRVParallelsState *s = bs->opaque;
>> +    int64_t offset;
>> +
>> +    qemu_co_mutex_lock(&s->lock);
>> +    offset = seek_to_sector(s, sector_num);
>> +    qemu_co_mutex_unlock(&s->lock);
> The lock isn't necessary here yet.  It may become necessary when write
> support is added, but probably not even then since seek_to_sector()
> cannot yield (it's not a coroutine function), so there are no possible
> races with other coroutines.
>
> The same also applies to parallels_co_read().  The lock there isn't
> necessary since the block driver is read-only and has no mutable state -
> there is no shared state that needs lock protection.
Do you propose to drop the lock here and add it later with write
support (patch 08)? I'd prefer to stay on the safe side. Yes, the
lock is not mandatory at the moment, but two patches later it
becomes mandatory.

I can extend the comment to clarify this.
Stefan Hajnoczi April 23, 2015, 9:03 a.m. UTC | #3
On Wed, Apr 22, 2015 at 03:42:23PM +0300, Denis V. Lunev wrote:
> On 22/04/15 15:39, Stefan Hajnoczi wrote:
> >On Wed, Mar 11, 2015 at 01:27:59PM +0300, Denis V. Lunev wrote:
> >>+static int64_t coroutine_fn parallels_co_get_block_status(BlockDriverState *bs,
> >>+        int64_t sector_num, int nb_sectors, int *pnum)
> >>+{
> >>+    BDRVParallelsState *s = bs->opaque;
> >>+    int64_t offset;
> >>+
> >>+    qemu_co_mutex_lock(&s->lock);
> >>+    offset = seek_to_sector(s, sector_num);
> >>+    qemu_co_mutex_unlock(&s->lock);
> >The lock isn't necessary here yet.  It may become necessary when write
> >support is added, but probably not even then since seek_to_sector()
> >cannot yield (it's not a coroutine function), so there are no possible
> >races with other coroutines.
> >
> >The same also applies to parallels_co_read().  The lock there isn't
> >necessary since the block driver is read-only and has no mutable state -
> >there is no shared state that needs lock protection.
> Do you propose to drop the lock here and add it later with write
> support (patch 08)? I'd prefer to stay on the safe side. Yes, the
> lock is not mandatory at the moment, but two patches later it
> becomes mandatory.

This locking approach is very conservative.  At the end of the series,
the only region that needs a lock is allocate_clusters() because
bdrv_write_zeroes() may yield before s->bat_bitmap[] is assigned - we
need to prevent simultaneous allocate_clusters() calls to the same
clusters.

I'm fine with the conservative approach, but please add a doc comment to
BDRVParallelsState.lock explaining what this lock protects.  This way
people reading the code will understand your philosophy and be able to
follow it when modifying the code.
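
For example, something along these lines would do (just a sketch; the
exact wording and the surrounding fields are up to you):

typedef struct BDRVParallelsState {
    /*
     * Protects updates to the BAT.  Once write support is added,
     * allocate_clusters() relies on it: bdrv_write_zeroes() may
     * yield before s->bat_bitmap[] is assigned, so two coroutines
     * could otherwise allocate the same cluster.  The read paths
     * take the lock as well, purely as a conservative measure.
     */
    CoMutex lock;
    ...
} BDRVParallelsState;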

Stefan
Denis V. Lunev April 23, 2015, 9:23 a.m. UTC | #4
On 23/04/15 12:03, Stefan Hajnoczi wrote:
> On Wed, Apr 22, 2015 at 03:42:23PM +0300, Denis V. Lunev wrote:
>> On 22/04/15 15:39, Stefan Hajnoczi wrote:
>>> On Wed, Mar 11, 2015 at 01:27:59PM +0300, Denis V. Lunev wrote:
>>>> +static int64_t coroutine_fn parallels_co_get_block_status(BlockDriverState *bs,
>>>> +        int64_t sector_num, int nb_sectors, int *pnum)
>>>> +{
>>>> +    BDRVParallelsState *s = bs->opaque;
>>>> +    int64_t offset;
>>>> +
>>>> +    qemu_co_mutex_lock(&s->lock);
>>>> +    offset = seek_to_sector(s, sector_num);
>>>> +    qemu_co_mutex_unlock(&s->lock);
>>> The lock isn't necessary here yet.  It may become necessary when write
>>> support is added, but probably not even then since seek_to_sector()
>>> cannot yield (it's not a coroutine function), so there are no possible
>>> races with other coroutines.
>>>
>>> The same also applies to parallels_co_read().  The lock there isn't
>>> necessary since the block driver is read-only and has no mutable state -
>>> there is no shared state that needs lock protection.
>> Do you propose to drop the lock here and add it later with write
>> support (patch 08)? I'd prefer to stay on the safe side. Yes, the
>> lock is not mandatory at the moment, but two patches later it
>> becomes mandatory.
> This locking approach is very conservative.  At the end of the series,
> the only region that needs a lock is allocate_clusters() because
> bdrv_write_zeroes() may yield before s->bat_bitmap[] is assigned - we
> need to prevent simultaneous allocate_clusters() calls to the same
> clusters.
>
> I'm fine with the conservative approach, but please add a doc comment to
> BDRVParallelsState.lock explaining what this lock protects.  This way
> people reading the code will understand your philosophy and be able to
> follow it when modifying the code.
>
> Stefan
OK, sounds good to me.

I would like to implement the code as conservatively as possible at
the beginning and add optimizations later, step by step, so that
there is a clear path to bisect to the patch that introduced a
problem.

At the moment I do not think that this locking scheme will affect
performance, but I could be wrong. From the functional point of view
there is a lot of work to be done before the locking needs tweaking,
in my opinion.
Stefan Hajnoczi April 24, 2015, 8:27 a.m. UTC | #5
On Thu, Apr 23, 2015 at 12:23:43PM +0300, Denis V. Lunev wrote:
> On 23/04/15 12:03, Stefan Hajnoczi wrote:
> >On Wed, Apr 22, 2015 at 03:42:23PM +0300, Denis V. Lunev wrote:
> >>On 22/04/15 15:39, Stefan Hajnoczi wrote:
> >>>On Wed, Mar 11, 2015 at 01:27:59PM +0300, Denis V. Lunev wrote:
> >>>>+static int64_t coroutine_fn parallels_co_get_block_status(BlockDriverState *bs,
> >>>>+        int64_t sector_num, int nb_sectors, int *pnum)
> >>>>+{
> >>>>+    BDRVParallelsState *s = bs->opaque;
> >>>>+    int64_t offset;
> >>>>+
> >>>>+    qemu_co_mutex_lock(&s->lock);
> >>>>+    offset = seek_to_sector(s, sector_num);
> >>>>+    qemu_co_mutex_unlock(&s->lock);
> >>>The lock isn't necessary here yet.  It may become necessary when write
> >>>support is added, but probably not even then since seek_to_sector()
> >>>cannot yield (it's not a coroutine function), so there are no possible
> >>>races with other coroutines.
> >>>
> >>>The same also applies to parallels_co_read().  The lock there isn't
> >>>necessary since the block driver is read-only and has no mutable state -
> >>>there is no shared state that needs lock protection.
> >>Do you propose to drop the lock here and add it later with write
> >>support (patch 08)? I'd prefer to stay on the safe side. Yes, the
> >>lock is not mandatory at the moment, but two patches later it
> >>becomes mandatory.
> >This locking approach is very conservative.  At the end of the series,
> >the only region that needs a lock is allocate_clusters() because
> >bdrv_write_zeroes() may yield before s->bat_bitmap[] is assigned - we
> >need to prevent simultaneous allocate_clusters() calls to the same
> >clusters.
> >
> >I'm fine with the conservative approach, but please add a doc comment to
> >BDRVParallelsState.lock explaining what this lock protects.  This way
> >people reading the code will understand your philosophy and be able to
> >follow it when modifying the code.
> >
> >Stefan
> OK, sounds good to me.
> 
> I would like to implement the code as conservatively as possible at
> the beginning and add optimizations later, step by step, so that
> there is a clear path to bisect to the patch that introduced a
> problem.
> 
> At the moment I do not think that this locking scheme will affect
> performance, but I could be wrong. From the functional point of view
> there is a lot of work to be done before the locking needs tweaking,
> in my opinion.

Okay.

Stefan

Patch

diff --git a/block/parallels.c b/block/parallels.c
index 8770c82..b469984 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -166,6 +166,26 @@  static int cluster_remainder(BDRVParallelsState *s, int64_t sector_num,
     return MIN(nb_sectors, ret);
 }
 
+static int64_t coroutine_fn parallels_co_get_block_status(BlockDriverState *bs,
+        int64_t sector_num, int nb_sectors, int *pnum)
+{
+    BDRVParallelsState *s = bs->opaque;
+    int64_t offset;
+
+    qemu_co_mutex_lock(&s->lock);
+    offset = seek_to_sector(s, sector_num);
+    qemu_co_mutex_unlock(&s->lock);
+
+    *pnum = cluster_remainder(s, sector_num, nb_sectors);
+
+    if (offset < 0) {
+        return 0;
+    }
+
+    return (offset << BDRV_SECTOR_BITS) |
+        BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID;
+}
+
 static int parallels_read(BlockDriverState *bs, int64_t sector_num,
                     uint8_t *buf, int nb_sectors)
 {
@@ -213,6 +233,7 @@  static BlockDriver bdrv_parallels = {
     .bdrv_open		= parallels_open,
     .bdrv_read          = parallels_co_read,
     .bdrv_close		= parallels_close,
+    .bdrv_co_get_block_status = parallels_co_get_block_status,
 };
 
 static void bdrv_parallels_init(void)