virtio-pci: correctly set virtio pci queue mem multiplier

Message ID 20240212075233.1507612-1-schalla@marvell.com
State New
Series virtio-pci: correctly set virtio pci queue mem multiplier

Commit Message

Srujana Challa Feb. 12, 2024, 7:52 a.m. UTC
Currently, the virtio_pci_queue_mem_mult() function returns 4K when
VIRTIO_PCI_FLAG_PAGE_PER_VQ is set. This is not correct when the host
page size is 64K.
This patch fixes that by returning the host page size instead.

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 hw/virtio/virtio-pci.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)
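
For context (a sketch of the virtio 1.x notify capability layout, not part of
the patch): this multiplier is what gets advertised to the guest as the notify
capability's notify_off_multiplier, which is why changing it is guest-visible
and why it determines how much BAR space each queue's doorbell takes.

#include <stdint.h>

/* Sketch only -- mirrors the generic virtio 1.x notify capability
 * (cf. the virtio spec; the kernel/QEMU headers use __le32 here).
 * The guest computes a queue's doorbell address as:
 *     cap.offset + queue_notify_off * notify_off_multiplier
 * so a multiplier of 4 packs all doorbells together, 0x1000 reserves
 * a 4K page per queue, and qemu_real_host_page_size() reserves one
 * host page (64K on a 64K-page host) per queue. */
struct virtio_pci_cap {                   /* trimmed to the relevant fields */
    uint8_t  bar;                         /* BAR holding the structure */
    uint32_t offset;                      /* offset within the BAR */
    uint32_t length;                      /* length of the structure */
};

struct virtio_pci_notify_cap {
    struct virtio_pci_cap cap;
    uint32_t notify_off_multiplier;       /* stride between queue doorbells */
};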

Comments

Michael S. Tsirkin Feb. 13, 2024, 10:47 a.m. UTC | #1
On Mon, Feb 12, 2024 at 01:22:33PM +0530, Srujana Challa wrote:
> Currently, virtio_pci_queue_mem_mult function returns 4K when
> VIRTIO_PCI_FLAG_PAGE_PER_VQ is set. But this is not correct
> when host has page size as 64K.
> This patch fixes the same.
> 
> Signed-off-by: Srujana Challa <schalla@marvell.com>

You can't tweak guest visible values like this without
compat machinery. It's also going to consume a ton more
phys memory - can this break any configs?
Why is this a problem? Just with vdpa?
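
(A minimal sketch of the compat machinery meant here, with a hypothetical
property name: existing machine types would pin the old behaviour via a
hw_compat_* entry in hw/core/machine.c, e.g.)

/* Sketch only -- "host-page-per-vq" is a hypothetical knob, not an existing
 * property; older machine types would keep today's 4K behaviour by forcing
 * it off. GlobalProperty entries are { driver, property, value }. */
GlobalProperty hw_compat_example[] = {
    { "virtio-pci", "host-page-per-vq", "off" },
};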

> ---
>  hw/virtio/virtio-pci.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
> 
> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> index e433879542..028df99991 100644
> --- a/hw/virtio/virtio-pci.c
> +++ b/hw/virtio/virtio-pci.c
> @@ -316,12 +316,10 @@ static bool virtio_pci_ioeventfd_enabled(DeviceState *d)
>      return (proxy->flags & VIRTIO_PCI_FLAG_USE_IOEVENTFD) != 0;
>  }
>  
> -#define QEMU_VIRTIO_PCI_QUEUE_MEM_MULT 0x1000
> -
>  static inline int virtio_pci_queue_mem_mult(struct VirtIOPCIProxy *proxy)
>  {
>      return (proxy->flags & VIRTIO_PCI_FLAG_PAGE_PER_VQ) ?
> -        QEMU_VIRTIO_PCI_QUEUE_MEM_MULT : 4;
> +        qemu_real_host_page_size()  : 4;
>  }
>  
>  static int virtio_pci_ioeventfd_assign(DeviceState *d, EventNotifier *notifier,
> -- 
> 2.25.1
Srujana Challa Feb. 13, 2024, 11:50 a.m. UTC | #2
> Subject: [EXT] Re: [PATCH] virtio-pci: correctly set virtio pci queue mem
> multiplier
> 
> On Mon, Feb 12, 2024 at 01:22:33PM +0530, Srujana Challa wrote:
> > Currently, virtio_pci_queue_mem_mult function returns 4K when
> > VIRTIO_PCI_FLAG_PAGE_PER_VQ is set. But this is not correct when host
> > has page size as 64K.
> > This patch fixes the same.
> >
> > Signed-off-by: Srujana Challa <schalla@marvell.com>
> 
> You can't tweak guest visible values like this without compat machinery. It's
> also going to consume a ton more phys memory - can this break any configs?
> Why is this a problem? Just with vdpa?

We are observing this issue with vdpa when the host page size is 64K. We haven't
verified any other backends, but I think any backend that uses the API below would
fail if the host page size is anything other than 4K, right?
Also, as per VIRTIO_PCI_FLAG_PAGE_PER_VQ, shouldn't the multiplier be equal to the
host page size?

static int virtio_pci_set_host_notifier_mr(DeviceState *d, int n,
                                           MemoryRegion *mr, bool assign)
{
    VirtIOPCIProxy *proxy = to_virtio_pci_proxy(d);
    int offset;

    if (n >= VIRTIO_QUEUE_MAX || !virtio_pci_modern(proxy) ||
        virtio_pci_queue_mem_mult(proxy) != memory_region_size(mr)) {
        return -1;
    }
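
To make the failure concrete (illustrative numbers only, assuming the vDPA
backend exposes one host page per queue doorbell, which seems to be the case
we are hitting on 64K-page hosts):

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    unsigned long queue_mem_mult   = 0x1000;  /* current hard-coded 4K multiplier */
    unsigned long notifier_mr_size = 0x10000; /* one 64K host page for the doorbell MR */

    /* Mirrors the "multiplier != memory_region_size(mr)" test above. */
    bool rejected = (queue_mem_mult != notifier_mr_size);
    printf("host notifier MR assign: %s\n", rejected ? "rejected (-1)" : "accepted");
    return 0;
}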

> 
> > ---
> >  hw/virtio/virtio-pci.c | 4 +---
> >  1 file changed, 1 insertion(+), 3 deletions(-)
> >
> > diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c index
> > e433879542..028df99991 100644
> > --- a/hw/virtio/virtio-pci.c
> > +++ b/hw/virtio/virtio-pci.c
> > @@ -316,12 +316,10 @@ static bool
> virtio_pci_ioeventfd_enabled(DeviceState *d)
> >      return (proxy->flags & VIRTIO_PCI_FLAG_USE_IOEVENTFD) != 0;  }
> >
> > -#define QEMU_VIRTIO_PCI_QUEUE_MEM_MULT 0x1000
> > -
> >  static inline int virtio_pci_queue_mem_mult(struct VirtIOPCIProxy
> > *proxy)  {
> >      return (proxy->flags & VIRTIO_PCI_FLAG_PAGE_PER_VQ) ?
> > -        QEMU_VIRTIO_PCI_QUEUE_MEM_MULT : 4;
> > +        qemu_real_host_page_size()  : 4;
> >  }
> >
> >  static int virtio_pci_ioeventfd_assign(DeviceState *d, EventNotifier
> > *notifier,
> > --
> > 2.25.1
Michael S. Tsirkin Feb. 13, 2024, 12:03 p.m. UTC | #3
On Tue, Feb 13, 2024 at 11:50:34AM +0000, Srujana Challa wrote:
> > Subject: [EXT] Re: [PATCH] virtio-pci: correctly set virtio pci queue mem
> > multiplier
> > 
> > On Mon, Feb 12, 2024 at 01:22:33PM +0530, Srujana Challa wrote:
> > > Currently, virtio_pci_queue_mem_mult function returns 4K when
> > > VIRTIO_PCI_FLAG_PAGE_PER_VQ is set. But this is not correct when host
> > > has page size as 64K.
> > > This patch fixes the same.
> > >
> > > Signed-off-by: Srujana Challa <schalla@marvell.com>
> > 
> > You can't tweak guest visible values like this without compat machinery. It's
> > also going to consume a ton more phys memory - can this break any configs?
> > Why is this a problem? Just with vdpa?
> 
> We are observing the issue with vdpa when host has page size of 64K. We haven't
> verified any other backends. I think, any backend that uses below API would fail
> if host has page size other than 4K right?
> And also as per VIRTIO_PCI_FLAG_PAGE_PER_VQ, it should be equal to
> page_size right?
> 
> static int virtio_pci_set_host_notifier_mr(DeviceState *d, int n,
>                                            MemoryRegion *mr, bool assign)
> {
>     VirtIOPCIProxy *proxy = to_virtio_pci_proxy(d);
>     int offset;
> 
>     if (n >= VIRTIO_QUEUE_MAX || !virtio_pci_modern(proxy) ||
>         virtio_pci_queue_mem_mult(proxy) != memory_region_size(mr)) {
>         return -1;
>     }

Yes, but not everyone uses that, right? Plain virtio in software with
no tricks doesn't care?


> > 
> > > ---
> > >  hw/virtio/virtio-pci.c | 4 +---
> > >  1 file changed, 1 insertion(+), 3 deletions(-)
> > >
> > > diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c index
> > > e433879542..028df99991 100644
> > > --- a/hw/virtio/virtio-pci.c
> > > +++ b/hw/virtio/virtio-pci.c
> > > @@ -316,12 +316,10 @@ static bool
> > virtio_pci_ioeventfd_enabled(DeviceState *d)
> > >      return (proxy->flags & VIRTIO_PCI_FLAG_USE_IOEVENTFD) != 0;  }
> > >
> > > -#define QEMU_VIRTIO_PCI_QUEUE_MEM_MULT 0x1000
> > > -
> > >  static inline int virtio_pci_queue_mem_mult(struct VirtIOPCIProxy
> > > *proxy)  {
> > >      return (proxy->flags & VIRTIO_PCI_FLAG_PAGE_PER_VQ) ?
> > > -        QEMU_VIRTIO_PCI_QUEUE_MEM_MULT : 4;
> > > +        qemu_real_host_page_size()  : 4;
> > >  }
> > >
> > >  static int virtio_pci_ioeventfd_assign(DeviceState *d, EventNotifier
> > > *notifier,
> > > --
> > > 2.25.1
Srujana Challa Feb. 13, 2024, 12:37 p.m. UTC | #4
> Subject: Re: [EXT] Re: [PATCH] virtio-pci: correctly set virtio pci queue mem
> multiplier
> 
> On Tue, Feb 13, 2024 at 11:50:34AM +0000, Srujana Challa wrote:
> > > Subject: [EXT] Re: [PATCH] virtio-pci: correctly set virtio pci
> > > queue mem multiplier
> > >
> > > On Mon, Feb 12, 2024 at 01:22:33PM +0530, Srujana Challa wrote:
> > > > Currently, virtio_pci_queue_mem_mult function returns 4K when
> > > > VIRTIO_PCI_FLAG_PAGE_PER_VQ is set. But this is not correct when
> > > > host has page size as 64K.
> > > > This patch fixes the same.
> > > >
> > > > Signed-off-by: Srujana Challa <schalla@marvell.com>
> > >
> > > You can't tweak guest visible values like this without compat
> > > machinery. It's also going to consume a ton more phys memory - can this
> break any configs?
> > > Why is this a problem? Just with vdpa?
> >
> > We are observing the issue with vdpa when host has page size of 64K.
> > We haven't verified any other backends. I think, any backend that uses
> > below API would fail if host has page size other than 4K right?
> > And also as per VIRTIO_PCI_FLAG_PAGE_PER_VQ, it should be equal to
> > page_size right?
> >
> > static int virtio_pci_set_host_notifier_mr(DeviceState *d, int n,
> >                                            MemoryRegion *mr, bool
> > assign) {
> >     VirtIOPCIProxy *proxy = to_virtio_pci_proxy(d);
> >     int offset;
> >
> >     if (n >= VIRTIO_QUEUE_MAX || !virtio_pci_modern(proxy) ||
> >         virtio_pci_queue_mem_mult(proxy) != memory_region_size(mr)) {
> >         return -1;
> >     }
> 
> Yes but not everyone uses that right? Plain virtio in software with no tricks
> doesn't care?
Yes. Are there any better ways to address this issue?

> 
> 
> > >
> > > > ---
> > > >  hw/virtio/virtio-pci.c | 4 +---
> > > >  1 file changed, 1 insertion(+), 3 deletions(-)
> > > >
> > > > diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c index
> > > > e433879542..028df99991 100644
> > > > --- a/hw/virtio/virtio-pci.c
> > > > +++ b/hw/virtio/virtio-pci.c
> > > > @@ -316,12 +316,10 @@ static bool
> > > virtio_pci_ioeventfd_enabled(DeviceState *d)
> > > >      return (proxy->flags & VIRTIO_PCI_FLAG_USE_IOEVENTFD) != 0;
> > > > }
> > > >
> > > > -#define QEMU_VIRTIO_PCI_QUEUE_MEM_MULT 0x1000
> > > > -
> > > >  static inline int virtio_pci_queue_mem_mult(struct VirtIOPCIProxy
> > > > *proxy)  {
> > > >      return (proxy->flags & VIRTIO_PCI_FLAG_PAGE_PER_VQ) ?
> > > > -        QEMU_VIRTIO_PCI_QUEUE_MEM_MULT : 4;
> > > > +        qemu_real_host_page_size()  : 4;
> > > >  }
> > > >
> > > >  static int virtio_pci_ioeventfd_assign(DeviceState *d,
> > > > EventNotifier *notifier,
> > > > --
> > > > 2.25.1
Michael S. Tsirkin Feb. 13, 2024, 3:53 p.m. UTC | #5
On Tue, Feb 13, 2024 at 12:37:36PM +0000, Srujana Challa wrote:
> > Subject: Re: [EXT] Re: [PATCH] virtio-pci: correctly set virtio pci queue mem
> > multiplier
> > 
> > On Tue, Feb 13, 2024 at 11:50:34AM +0000, Srujana Challa wrote:
> > > > Subject: [EXT] Re: [PATCH] virtio-pci: correctly set virtio pci
> > > > queue mem multiplier
> > > >
> > > > On Mon, Feb 12, 2024 at 01:22:33PM +0530, Srujana Challa wrote:
> > > > > Currently, virtio_pci_queue_mem_mult function returns 4K when
> > > > > VIRTIO_PCI_FLAG_PAGE_PER_VQ is set. But this is not correct when
> > > > > host has page size as 64K.
> > > > > This patch fixes the same.
> > > > >
> > > > > Signed-off-by: Srujana Challa <schalla@marvell.com>
> > > >
> > > > You can't tweak guest visible values like this without compat
> > > > machinery. It's also going to consume a ton more phys memory - can this
> > break any configs?
> > > > Why is this a problem? Just with vdpa?
> > >
> > > We are observing the issue with vdpa when host has page size of 64K.
> > > We haven't verified any other backends. I think, any backend that uses
> > > below API would fail if host has page size other than 4K right?
> > > And also as per VIRTIO_PCI_FLAG_PAGE_PER_VQ, it should be equal to
> > > page_size right?
> > >
> > > static int virtio_pci_set_host_notifier_mr(DeviceState *d, int n,
> > >                                            MemoryRegion *mr, bool
> > > assign) {
> > >     VirtIOPCIProxy *proxy = to_virtio_pci_proxy(d);
> > >     int offset;
> > >
> > >     if (n >= VIRTIO_QUEUE_MAX || !virtio_pci_modern(proxy) ||
> > >         virtio_pci_queue_mem_mult(proxy) != memory_region_size(mr)) {
> > >         return -1;
> > >     }
> > 
> > Yes but not everyone uses that right? Plain virtio in software with no tricks
> > doesn't care?
> Yes,  any other better ways to address this issue.?

Add a property that vdpa can set?


> > 
> > 
> > > >
> > > > > ---
> > > > >  hw/virtio/virtio-pci.c | 4 +---
> > > > >  1 file changed, 1 insertion(+), 3 deletions(-)
> > > > >
> > > > > diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c index
> > > > > e433879542..028df99991 100644
> > > > > --- a/hw/virtio/virtio-pci.c
> > > > > +++ b/hw/virtio/virtio-pci.c
> > > > > @@ -316,12 +316,10 @@ static bool
> > > > virtio_pci_ioeventfd_enabled(DeviceState *d)
> > > > >      return (proxy->flags & VIRTIO_PCI_FLAG_USE_IOEVENTFD) != 0;
> > > > > }
> > > > >
> > > > > -#define QEMU_VIRTIO_PCI_QUEUE_MEM_MULT 0x1000
> > > > > -
> > > > >  static inline int virtio_pci_queue_mem_mult(struct VirtIOPCIProxy
> > > > > *proxy)  {
> > > > >      return (proxy->flags & VIRTIO_PCI_FLAG_PAGE_PER_VQ) ?
> > > > > -        QEMU_VIRTIO_PCI_QUEUE_MEM_MULT : 4;
> > > > > +        qemu_real_host_page_size()  : 4;
> > > > >  }
> > > > >
> > > > >  static int virtio_pci_ioeventfd_assign(DeviceState *d,
> > > > > EventNotifier *notifier,
> > > > > --
> > > > > 2.25.1
Srujana Challa Feb. 19, 2024, 6:15 p.m. UTC | #6
> -----Original Message-----
> From: Michael S. Tsirkin <mst@redhat.com>
> Sent: Tuesday, February 13, 2024 9:23 PM
> To: Srujana Challa <schalla@marvell.com>
> Cc: qemu-devel@nongnu.org; Vamsi Krishna Attunuru
> <vattunuru@marvell.com>; Jerin Jacob <jerinj@marvell.com>
> Subject: Re: [EXT] Re: [PATCH] virtio-pci: correctly set virtio pci queue mem
> multiplier
> 
> On Tue, Feb 13, 2024 at 12:37:36PM +0000, Srujana Challa wrote:
> > > Subject: Re: [EXT] Re: [PATCH] virtio-pci: correctly set virtio pci
> > > queue mem multiplier
> > >
> > > On Tue, Feb 13, 2024 at 11:50:34AM +0000, Srujana Challa wrote:
> > > > > Subject: [EXT] Re: [PATCH] virtio-pci: correctly set virtio pci
> > > > > queue mem multiplier
> > > > >
> > > > > On Mon, Feb 12, 2024 at 01:22:33PM +0530, Srujana Challa wrote:
> > > > > > Currently, virtio_pci_queue_mem_mult function returns 4K when
> > > > > > VIRTIO_PCI_FLAG_PAGE_PER_VQ is set. But this is not correct
> > > > > > when host has page size as 64K.
> > > > > > This patch fixes the same.
> > > > > >
> > > > > > Signed-off-by: Srujana Challa <schalla@marvell.com>
> > > > >
> > > > > You can't tweak guest visible values like this without compat
> > > > > machinery. It's also going to consume a ton more phys memory -
> > > > > can this
> > > break any configs?
> > > > > Why is this a problem? Just with vdpa?
> > > >
> > > > We are observing the issue with vdpa when host has page size of 64K.
> > > > We haven't verified any other backends. I think, any backend that
> > > > uses below API would fail if host has page size other than 4K right?
> > > > And also as per VIRTIO_PCI_FLAG_PAGE_PER_VQ, it should be equal to
> > > > page_size right?
> > > >
> > > > static int virtio_pci_set_host_notifier_mr(DeviceState *d, int n,
> > > >                                            MemoryRegion *mr, bool
> > > > assign) {
> > > >     VirtIOPCIProxy *proxy = to_virtio_pci_proxy(d);
> > > >     int offset;
> > > >
> > > >     if (n >= VIRTIO_QUEUE_MAX || !virtio_pci_modern(proxy) ||
> > > >         virtio_pci_queue_mem_mult(proxy) != memory_region_size(mr)) {
> > > >         return -1;
> > > >     }
> > >
> > > Yes but not everyone uses that right? Plain virtio in software with
> > > no tricks doesn't care?
> > Yes,  any other better ways to address this issue.?
> 
> Add a property that vdpa can set?
I think, as per the VIRTIO_PCI_FLAG_PAGE_PER_VQ flag, the multiplier should be equal
to the host page size. Maybe for software use we can introduce one more flag that is
independent of the host page size?
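
One possible shape for that (flag name, bit number and property name below are
hypothetical, sketched only to illustrate the idea; the default stays 4K so
existing configs are unaffected):

/* Sketch only -- pick a genuinely unused VIRTIO_PCI_FLAG_* bit. */
#define VIRTIO_PCI_FLAG_HOST_PAGE_PER_VQ_BIT  18
#define VIRTIO_PCI_FLAG_HOST_PAGE_PER_VQ \
    (1 << VIRTIO_PCI_FLAG_HOST_PAGE_PER_VQ_BIT)

static inline int virtio_pci_queue_mem_mult(struct VirtIOPCIProxy *proxy)
{
    if (proxy->flags & VIRTIO_PCI_FLAG_HOST_PAGE_PER_VQ) {
        return qemu_real_host_page_size();   /* opt-in, e.g. for vdpa on 64K hosts */
    }
    return (proxy->flags & VIRTIO_PCI_FLAG_PAGE_PER_VQ) ?
        QEMU_VIRTIO_PCI_QUEUE_MEM_MULT : 4;  /* unchanged 4K default */
}

/* and in the virtio-pci property list, something like: */
DEFINE_PROP_BIT("host-page-per-vq", VirtIOPCIProxy, flags,
                VIRTIO_PCI_FLAG_HOST_PAGE_PER_VQ_BIT, false),

A vdpa-backed device could then enable it explicitly on the command line
(again, hypothetical property name), e.g.
-device virtio-net-pci,page-per-vq=on,host-page-per-vq=on.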

> 
> 
> > >
> > >
> > > > >
> > > > > > ---
> > > > > >  hw/virtio/virtio-pci.c | 4 +---
> > > > > >  1 file changed, 1 insertion(+), 3 deletions(-)
> > > > > >
> > > > > > diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> > > > > > index
> > > > > > e433879542..028df99991 100644
> > > > > > --- a/hw/virtio/virtio-pci.c
> > > > > > +++ b/hw/virtio/virtio-pci.c
> > > > > > @@ -316,12 +316,10 @@ static bool
> > > > > virtio_pci_ioeventfd_enabled(DeviceState *d)
> > > > > >      return (proxy->flags & VIRTIO_PCI_FLAG_USE_IOEVENTFD) !=
> > > > > > 0; }
> > > > > >
> > > > > > -#define QEMU_VIRTIO_PCI_QUEUE_MEM_MULT 0x1000
> > > > > > -
> > > > > >  static inline int virtio_pci_queue_mem_mult(struct
> > > > > > VirtIOPCIProxy
> > > > > > *proxy)  {
> > > > > >      return (proxy->flags & VIRTIO_PCI_FLAG_PAGE_PER_VQ) ?
> > > > > > -        QEMU_VIRTIO_PCI_QUEUE_MEM_MULT : 4;
> > > > > > +        qemu_real_host_page_size()  : 4;
> > > > > >  }
> > > > > >
> > > > > >  static int virtio_pci_ioeventfd_assign(DeviceState *d,
> > > > > > EventNotifier *notifier,
> > > > > > --
> > > > > > 2.25.1

Patch

diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index e433879542..028df99991 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -316,12 +316,10 @@  static bool virtio_pci_ioeventfd_enabled(DeviceState *d)
     return (proxy->flags & VIRTIO_PCI_FLAG_USE_IOEVENTFD) != 0;
 }
 
-#define QEMU_VIRTIO_PCI_QUEUE_MEM_MULT 0x1000
-
 static inline int virtio_pci_queue_mem_mult(struct VirtIOPCIProxy *proxy)
 {
     return (proxy->flags & VIRTIO_PCI_FLAG_PAGE_PER_VQ) ?
-        QEMU_VIRTIO_PCI_QUEUE_MEM_MULT : 4;
+        qemu_real_host_page_size()  : 4;
 }
 
 static int virtio_pci_ioeventfd_assign(DeviceState *d, EventNotifier *notifier,