
qemu/xendisk: set maximum number of grants to be used

Message ID 4FACD9940200007800082F0D@nat28.tlf.novell.com
State New

Commit Message

Jan Beulich May 11, 2012, 7:19 a.m. UTC
Legacy (non-pvops) gntdev drivers may require this to be done when the
number of grants intended to be used simultaneously exceeds a certain
driver specific default limit.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/hw/xen_disk.c
+++ b/hw/xen_disk.c
@@ -536,6 +536,10 @@ static void blk_alloc(struct XenDevice *
     if (xen_mode != XEN_EMULATE) {
         batch_maps = 1;
     }
+    if (xc_gnttab_set_max_grants(xendev->gnttabdev,
+            max_requests * BLKIF_MAX_SEGMENTS_PER_REQUEST + 1) < 0)
+        xen_be_printf(xendev, 0, "xc_gnttab_set_max_grants failed: %s\n",
+                      strerror(errno));
 }
 
 static int blk_init(struct XenDevice *xendev)

Comments

Jan Beulich May 11, 2012, 2:19 p.m. UTC | #1
>>> On 11.05.12 at 09:19, "Jan Beulich" <JBeulich@suse.com> wrote:
> Legacy (non-pvops) gntdev drivers may require this to be done when the
> number of grants intended to be used simultaneously exceeds a certain
> driver specific default limit.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/hw/xen_disk.c
> +++ b/hw/xen_disk.c
> @@ -536,6 +536,10 @@ static void blk_alloc(struct XenDevice *
>      if (xen_mode != XEN_EMULATE) {
>          batch_maps = 1;
>      }
> +    if (xc_gnttab_set_max_grants(xendev->gnttabdev,
> +            max_requests * BLKIF_MAX_SEGMENTS_PER_REQUEST + 1) < 0)

In more extensive testing it appears that very rarely this value is still
too low:

xen be: qdisk-768: can't map 11 grant refs (Cannot allocate memory, 342 maps)

342 + 11 = 353 > 352 = 32 * 11

Could someone help out here? I first thought this might be due to
use_aio being non-zero, but ioreq_start() doesn't permit more than
max_requests struct ioreq instances to be in flight at once.

Additionally, shouldn't the driver be smarter and gracefully handle
grant mapping failures (as the per-domain map track table in the
hypervisor is a finite resource)?

Jan

> +        xen_be_printf(xendev, 0, "xc_gnttab_set_max_grants failed: %s\n",
> +                      strerror(errno));
>  }
>  
>  static int blk_init(struct XenDevice *xendev)
Stefano Stabellini May 11, 2012, 5:07 p.m. UTC | #2
On Fri, 11 May 2012, Jan Beulich wrote:
> >>> On 11.05.12 at 09:19, "Jan Beulich" <JBeulich@suse.com> wrote:
> > Legacy (non-pvops) gntdev drivers may require this to be done when the
> > number of grants intended to be used simultaneously exceeds a certain
> > driver specific default limit.
> > 
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > 
> > --- a/hw/xen_disk.c
> > +++ b/hw/xen_disk.c
> > @@ -536,6 +536,10 @@ static void blk_alloc(struct XenDevice *
> >      if (xen_mode != XEN_EMULATE) {
> >          batch_maps = 1;
> >      }
> > +    if (xc_gnttab_set_max_grants(xendev->gnttabdev,
> > +            max_requests * BLKIF_MAX_SEGMENTS_PER_REQUEST + 1) < 0)
> 
> In more extensive testing it appears that very rarely this value is still
> too low:
> 
> xen be: qdisk-768: can't map 11 grant refs (Cannot allocate memory, 342 maps)
> 
> 342 + 11 = 353 > 352 = 32 * 11
> 
> Could someone help out here? I first thought this might be due to
> use_aio being non-zero, but ioreq_start() doesn't permit more than
> max_requests struct ioreqs-s to be around.

Actually 342 + 11 = 353, which should still be OK because it is equal to
32 * 11 + 1, where the additional 1 is for the ring, right?


> Additionally, shouldn't the driver be smarter and gracefully handle
> grant mapping failures (as the per-domain map track table in the
> hypervisor is a finite resource)?

yes, probably
Jan Beulich May 14, 2012, 7:41 a.m. UTC | #3
>>> On 11.05.12 at 19:07, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Fri, 11 May 2012, Jan Beulich wrote:
>> >>> On 11.05.12 at 09:19, "Jan Beulich" <JBeulich@suse.com> wrote:
>> > Legacy (non-pvops) gntdev drivers may require this to be done when the
>> > number of grants intended to be used simultaneously exceeds a certain
>> > driver specific default limit.
>> > 
>> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> > 
>> > --- a/hw/xen_disk.c
>> > +++ b/hw/xen_disk.c
>> > @@ -536,6 +536,10 @@ static void blk_alloc(struct XenDevice *
>> >      if (xen_mode != XEN_EMULATE) {
>> >          batch_maps = 1;
>> >      }
>> > +    if (xc_gnttab_set_max_grants(xendev->gnttabdev,
>> > +            max_requests * BLKIF_MAX_SEGMENTS_PER_REQUEST + 1) < 0)
>> 
>> In more extensive testing it appears that very rarely this value is still
>> too low:
>> 
>> xen be: qdisk-768: can't map 11 grant refs (Cannot allocate memory, 342 maps)
>> 
>> 342 + 11 = 353 > 352 = 32 * 11
>> 
>> Could someone help out here? I first thought this might be due to
>> use_aio being non-zero, but ioreq_start() doesn't permit more than
>> max_requests struct ioreqs-s to be around.
> 
> Actually 342 + 11 = 353, that should be still OK because it is equal to
> 32 * 11 + 1, where the additional 1 is for the ring, right?

The +1 is for the ring, yes. And the calculation in the driver actually
appears to be fine. It's rather an issue with fragmentation afaict -
the driver needs to allocate 11 contiguous slots, and such may not
be available. I'll send out a v2 of the patch soon, taking fragmentation
into account.

Jan