[v7,RFC] block/vxhs: Initial commit to add Veritas HyperScale VxHS block device support

Message ID CAAo6VWPhSwfqVGE4Ba08F6Kj35k--eAnSEdR0qHo-cP2Fi95=Q@mail.gmail.com
State New
Headers show

Commit Message

Ashish Mittal Oct. 11, 2016, 7:56 a.m. UTC
Checked in a test server to libqnio that allows opening, reading, and
writing a vxhs vdisk using the qemu-io binary.

Steps to run the test server:
(1) Touch a file in /tmp with the base-name of the vdisk ID to be opened.
touch /tmp/\{98f48eef-b62f-46ee-b6e3-ad48ffd9ad0a\}
(2) Start the test server. It listens on port 9999 by default.
/path/to/qnio_server
(3) In another terminal, start the qemu-io binary and open the vdisk
qemu-io> open vxhs://127.0.0.1:9999/%7B98f48eef-b62f-46ee-b6e3-ad48ffd9ad0a%7D
(4) Now you can write and read data from the vdisk that is backed by a file.
qemu-io> writev -P 81 0 1k
qemu-io> read -v 0 1k

The following change would be needed to my last patch to allow opening of the vdisk:
$ git diff
diff --git a/block/vxhs.c b/block/vxhs.c
index 90a4343..d849a9b 100644
--- a/block/vxhs.c
+++ b/block/vxhs.c
@@ -1215,7 +1215,7 @@ static int vxhs_qemu_init(QDict *options, BDRVVXHSState *s,
     }

     ret = vxhs_qnio_iio_open(cfd, of_vsa_addr, rfd, file_name);
-    if (!ret) {
+    if (ret) {
         error_setg(&local_err, "Failed qnio_iio_open");
         ret = -EIO;
     }

Will work on the qemu-iotests test suite next.

Regards,
Ashish

On Tue, Oct 4, 2016 at 9:02 PM, Jeff Cody <jcody@redhat.com> wrote:
> On Wed, Sep 28, 2016 at 12:13:32PM +0100, Stefan Hajnoczi wrote:
>> On Tue, Sep 27, 2016 at 09:09:49PM -0700, Ashish Mittal wrote:
>
> [...]
>
>> > +/*
>> > + * This is called by QEMU when a flush gets triggered from within
>> > + * a guest at the block layer, either for IDE or SCSI disks.
>> > + */
>> > +static int vxhs_co_flush(BlockDriverState *bs)
>>
>> This is called from coroutine context, please add the coroutine_fn
>> function attribute to document this.
>>
>> > +{
>> > +    BDRVVXHSState *s = bs->opaque;
>> > +    int64_t size = 0;
>> > +    int ret = 0;
>> > +
>> > +    /*
>> > +     * VDISK_AIO_FLUSH ioctl is a no-op at present and will
>> > +     * always return success. This could change in the future.
>> > +     */
>> > +    ret = vxhs_qnio_iio_ioctl(s->qnio_ctx,
>> > +            s->vdisk_hostinfo[s->vdisk_cur_host_idx].vdisk_rfd,
>> > +            VDISK_AIO_FLUSH, &size, NULL, IIO_FLAG_SYNC);
>>
>> This function is not allowed to block.  It cannot do a synchronous
>> flush.  This line is misleading because the constant is called
>> VDISK_AIO_FLUSH, but looking at the library code I see it's actually a
>> synchronous call that ends up in a loop that sleeps (!) waiting for the
>> response.
>>
>> Please do an async flush and qemu_coroutine_yield() to return
>> control to QEMU's event loop.  When the flush completes you can
>> qemu_coroutine_enter() again to return from this function.
>>
>> > +
>> > +    if (ret < 0) {
>> > +        trace_vxhs_co_flush(s->vdisk_guid, ret, errno);
>> > +        vxhs_close(bs);
>>
>> This looks unsafe.  Won't it cause double close() calls for s->fds[]
>> when bdrv_close() is called later?
>>
>
> Calling the close on a failed flush is a good idea, in my opinion.  However,
> to make it safe, bs->drv MUST be set to NULL after the call to vxhs_close().
> That will prevent double free / close calls,  and also fail out new I/O.
> (This is actually what the gluster driver does, for instance).
>
> Jeff
>
>
>

Comments

Jeff Cody Oct. 18, 2016, 7:10 p.m. UTC | #1
On Tue, Oct 11, 2016 at 12:56:06AM -0700, ashish mittal wrote:
> Checked in a test server to libqnio that allows to open, read, write
> to a vxhs vdisk using the qemu-io binary.
> 
> Steps to run the test server:
> (1) Touch a file in /tmp with the base-name of the vdisk ID to be opened.
> touch /tmp/\{98f48eef-b62f-46ee-b6e3-ad48ffd9ad0a\}
> (2) Start the test server. It listens on port 9999 by default.
> /path/to/qnio_server
> (3) In another terminal, start the qemu-io binary and open the vdisk
> qemu-io> open vxhs://127.0.0.1:9999/%7B98f48eef-b62f-46ee-b6e3-ad48ffd9ad0a%7D
> (4) Now you can write and read data from the vdisk that is backed by a file.
> qemu-io> writev -P 81 0 1k
> qemu-io> read -v 0 1k
> 
> Following change would be needed to my last patch to allow opening of the vdisk:
> $ git diff
> diff --git a/block/vxhs.c b/block/vxhs.c
> index 90a4343..d849a9b 100644
> --- a/block/vxhs.c
> +++ b/block/vxhs.c
> @@ -1215,7 +1215,7 @@ static int vxhs_qemu_init(QDict *options,
> BDRVVXHSState *s,
>      }
> 
>      ret = vxhs_qnio_iio_open(cfd, of_vsa_addr, rfd, file_name);
> -    if (!ret) {
> +    if (ret) {
>          error_setg(&local_err, "Failed qnio_iio_open");
>          ret = -EIO;
>      }
> 
> Will work on the qemu-iotests test suite next.
> 
> Regards,
> Ashish
> 

Hi Ashish,

I wanted to check and see how the qemu-iotest suite was coming along.

A couple of things on the test server:

It is hardcoded to only accept 4MB images.  Is it possible to relax that
restriction (is a restriction even needed)?

Also, the test server is hardcoded to use the path "/tmp". This needs to
be passed in as an argument, as qemu-iotests will want to run everything
from its scratch test directory.

A suggestion: it would also be a good idea, if possible, to enable all the
supported ioctl commands in the test server.

Jeff
Ketan Nilangekar Oct. 19, 2016, 8:01 p.m. UTC | #2
Hi Jeff,

Please see my comments inline.

Thanks,
Ketan.

On 10/18/16, 12:10 PM, "Jeff Cody" <jcody@redhat.com> wrote:

>On Tue, Oct 11, 2016 at 12:56:06AM -0700, ashish mittal wrote:
>> Checked in a test server to libqnio that allows to open, read, write
>> to a vxhs vdisk using the qemu-io binary.
>>
>> Steps to run the test server:
>> (1) Touch a file in /tmp with the base-name of the vdisk ID to be opened.
>> touch /tmp/\{98f48eef-b62f-46ee-b6e3-ad48ffd9ad0a\}
>> (2) Start the test server. It listens on port 9999 by default.
>> /path/to/qnio_server
>> (3) In another terminal, start the qemu-io binary and open the vdisk
>> qemu-io> open vxhs://127.0.0.1:9999/%7B98f48eef-b62f-46ee-b6e3-ad48ffd9ad0a%7D
>> (4) Now you can write and read data from the vdisk that is backed by a file.
>> qemu-io> writev -P 81 0 1k
>> qemu-io> read -v 0 1k
>>
>> Following change would be needed to my last patch to allow opening of the vdisk:
>> $ git diff
>> diff --git a/block/vxhs.c b/block/vxhs.c
>> index 90a4343..d849a9b 100644
>> --- a/block/vxhs.c
>> +++ b/block/vxhs.c
>> @@ -1215,7 +1215,7 @@ static int vxhs_qemu_init(QDict *options,
>> BDRVVXHSState *s,
>>      }
>>
>>      ret = vxhs_qnio_iio_open(cfd, of_vsa_addr, rfd, file_name);
>> -    if (!ret) {
>> +    if (ret) {
>>          error_setg(&local_err, "Failed qnio_iio_open");
>>          ret = -EIO;
>>      }
>>
>> Will work on the qemu-iotests test suite next.
>>
>> Regards,
>> Ashish
>>
>
>Hi Ashish,
>
>I wanted to check and see how the qemu-iotest suite was coming along.
>
>A couple of things on the test server:
>
>It is hardcoded to only accept 4MB images.  Is it possible to relax that
>restriction (is a restriction even needed)?

We are removing this restriction from the QEMU driver and qnio.

>
>Also, the test server is hardcoded to use the path "/tmp".   This needs to
>be passed in as an argument, as qemu-iotests will want to run everything
>from its scratch test directory.

This is being worked on and will be part of the next check-in.

>
>A suggestion: it would also be a good idea, if possible, to enable all the
>supported ioctl commands in the test server.

We will look into this.

>
>Jeff

Patch

diff --git a/block/vxhs.c b/block/vxhs.c
index 90a4343..d849a9b 100644
--- a/block/vxhs.c
+++ b/block/vxhs.c
@@ -1215,7 +1215,7 @@ static int vxhs_qemu_init(QDict *options, BDRVVXHSState *s,
     }

     ret = vxhs_qnio_iio_open(cfd, of_vsa_addr, rfd, file_name);
-    if (!ret) {
+    if (ret) {
         error_setg(&local_err, "Failed qnio_iio_open");
         ret = -EIO;
     }