[v3,04/13] netmap: Drop netmap_can_send

Message ID 1432032670-15124-5-git-send-email-famz@redhat.com
State New

Commit Message

Fam Zheng May 19, 2015, 10:51 a.m. UTC
This callback is called by the main loop before polling s->fd; if it
returns false, the fd will not be polled in this iteration.

This is redundant with the check inside the read callback. After this
patch, the data will be copied from s->fd to s->iov when it arrives. If
the device can't receive, the packet will be queued to incoming_queue,
and when the device status changes, this queue will be flushed.

Also remove the qemu_can_send_packet() check in netmap_send. If it's
true, we are good; if it's false, qemu_sendv_packet_async() will return
0 and read poll will be disabled until netmap_send_completed() is
called.

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 net/netmap.c | 14 ++------------
 1 file changed, 2 insertions(+), 12 deletions(-)
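
For readers skimming the thread below: after this patch, the receive path
works roughly as follows. This is a simplified sketch, not the literal
contents of net/netmap.c; the ring-parsing code that fills s->iov is
elided, and the field name s->me.rx plus the local iovcnt are assumptions
for illustration.

    static void netmap_send(void *opaque)
    {
        NetmapState *s = opaque;
        struct netmap_ring *ring = s->me.rx;   /* assumed field name */

        while (!nm_ring_empty(ring)) {
            int iovcnt = 0;
            ssize_t len;

            /* ... copy the next packet from the netmap RX ring into
             * s->iov and set iovcnt (elided) ... */

            len = qemu_sendv_packet_async(&s->nc, s->iov, iovcnt,
                                          netmap_send_completed);
            if (len == 0) {
                /* The peer can't receive right now: the packet was
                 * queued on its incoming_queue. Stop polling s->fd
                 * until the queue is flushed and
                 * netmap_send_completed() runs. */
                netmap_read_poll(s, false);
                break;
            }
        }
    }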

Comments

Stefan Hajnoczi May 19, 2015, 2:54 p.m. UTC | #1
On Tue, May 19, 2015 at 10:51:01AM +0000, Fam Zheng wrote:
> This callback is called by the main loop before polling s->fd; if it
> returns false, the fd will not be polled in this iteration.
> 
> This is redundant with the check inside the read callback. After this
> patch, the data will be copied from s->fd to s->iov when it arrives. If
> the device can't receive, the packet will be queued to incoming_queue,
> and when the device status changes, this queue will be flushed.
> 
> Also remove the qemu_can_send_packet() check in netmap_send. If it's
> true, we are good; if it's false, qemu_sendv_packet_async() will return
> 0 and read poll will be disabled until netmap_send_completed() is
> called.

This causes unbounded memory usage in QEMU because
qemu_net_queue_append_iov() does not drop packets when sent_cb != NULL.
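
For context, the drop logic in net/queue.c looks roughly like this
(paraphrased from memory; treat the exact shape as an approximation):

    static void qemu_net_queue_append_iov(NetQueue *queue,
                                          NetClientState *sender,
                                          unsigned flags,
                                          const struct iovec *iov,
                                          int iovcnt,
                                          NetPacketSent *sent_cb)
    {
        /* A full queue only drops packets that carry no completion
         * callback; a sender that passed sent_cb is waiting for it to
         * fire, so its packet is always appended. */
        if (queue->nq_count >= queue->nq_maxlen && !sent_cb) {
            return;
        }

        /* ... allocate a NetPacket, copy iov into it, append it to
         * queue->packets, and increment queue->nq_count (elided) ... */
    }

Since the queue layer never caps callback-carrying packets, any bound has
to come from the sender throttling itself.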
Fam Zheng May 25, 2015, 3:51 a.m. UTC | #2
On Tue, 05/19 15:54, Stefan Hajnoczi wrote:
> On Tue, May 19, 2015 at 10:51:01AM +0000, Fam Zheng wrote:
> > This callback is called by the main loop before polling s->fd; if it
> > returns false, the fd will not be polled in this iteration.
> > 
> > This is redundant with the check inside the read callback. After this
> > patch, the data will be copied from s->fd to s->iov when it arrives. If
> > the device can't receive, the packet will be queued to incoming_queue,
> > and when the device status changes, this queue will be flushed.
> > 
> > Also remove the qemu_can_send_packet() check in netmap_send. If it's
> > true, we are good; if it's false, qemu_sendv_packet_async() will return
> > 0 and read poll will be disabled until netmap_send_completed() is
> > called.
> 
> This causes unbounded memory usage in QEMU because
> qemu_net_queue_append_iov() does not drop packets when sent_cb != NULL.

I think netmap_send will use "netmap_read_poll(s, false)" to stop reading, so
only the first packet will be queued. Why is it unbounded?

Fam
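
The helper in question just records the flag that
netmap_update_fd_handler() (see the patch below) consults when
registering handlers. A minimal sketch, assuming it simply stores the
flag and re-registers:

    static void netmap_read_poll(NetmapState *s, bool enable)
    {
        /* Remember whether we want read notifications on s->fd, then
         * re-register the fd handlers to match. */
        s->read_poll = enable;
        netmap_update_fd_handler(s);
    }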
Stefan Hajnoczi May 26, 2015, 9:18 a.m. UTC | #3
On Mon, May 25, 2015 at 11:51:23AM +0800, Fam Zheng wrote:
> On Tue, 05/19 15:54, Stefan Hajnoczi wrote:
> > On Tue, May 19, 2015 at 10:51:01AM +0000, Fam Zheng wrote:
> > > This callback is called by the main loop before polling s->fd; if it
> > > returns false, the fd will not be polled in this iteration.
> > > 
> > > This is redundant with the check inside the read callback. After this
> > > patch, the data will be copied from s->fd to s->iov when it arrives. If
> > > the device can't receive, the packet will be queued to incoming_queue,
> > > and when the device status changes, this queue will be flushed.
> > > 
> > > Also remove the qemu_can_send_packet() check in netmap_send. If it's
> > > true, we are good; if it's false, qemu_sendv_packet_async() will return
> > > 0 and read poll will be disabled until netmap_send_completed() is
> > > called.
> > 
> > This causes unbounded memory usage in QEMU because
> > qemu_net_queue_append_iov() does not drop packets when sent_cb != NULL.
> 
> I think netmap_send will use "netmap_read_poll(s, false)" to stop reading, so
> only the first packet will be queued. Why is it unbounded?

I looked again and I agree with you.  It should stop after the first
packet and resume when the peer flushes the queue.

Stefan
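
The resume side is the completion callback passed to
qemu_sendv_packet_async(): once the peer flushes its incoming_queue, it
fires and re-enables read polling. Expected shape (the DO_UPCAST step is
an assumption about how NetmapState is recovered from the
NetClientState):

    static void netmap_send_completed(NetClientState *nc, ssize_t len)
    {
        NetmapState *s = DO_UPCAST(NetmapState, nc, nc);

        /* The queued packet has been delivered; resume reading from
         * the netmap fd. */
        netmap_read_poll(s, true);
    }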
Fam Zheng May 27, 2015, 7:24 a.m. UTC | #4
On Tue, 05/26 10:18, Stefan Hajnoczi wrote:
> On Mon, May 25, 2015 at 11:51:23AM +0800, Fam Zheng wrote:
> > On Tue, 05/19 15:54, Stefan Hajnoczi wrote:
> > > On Tue, May 19, 2015 at 10:51:01AM +0000, Fam Zheng wrote:
> > > > This callback is called by the main loop before polling s->fd; if it
> > > > returns false, the fd will not be polled in this iteration.
> > > > 
> > > > This is redundant with the check inside the read callback. After this
> > > > patch, the data will be copied from s->fd to s->iov when it arrives. If
> > > > the device can't receive, the packet will be queued to incoming_queue,
> > > > and when the device status changes, this queue will be flushed.
> > > > 
> > > > Also remove the qemu_can_send_packet() check in netmap_send. If it's
> > > > true, we are good; if it's false, qemu_sendv_packet_async() will return
> > > > 0 and read poll will be disabled until netmap_send_completed() is
> > > > called.
> > > 
> > > This causes unbounded memory usage in QEMU because
> > > qemu_net_queue_append_iov() does not drop packets when sent_cb != NULL.
> > 
> > I think netmap_send will use "netmap_read_poll(s, false)" to stop reading, so
> > only the first packet will be queued. Why is it unbounded?
> 
> I looked again and I agree with you.  It should stop after the first
> packet and resume when the peer flushes the queue.
> 

The other patches (socket and tap) have the same rationale. Are you happy with
this approach?

Fam
Stefan Hajnoczi June 2, 2015, 4:24 p.m. UTC | #5
On Wed, May 27, 2015 at 03:24:38PM +0800, Fam Zheng wrote:
> On Tue, 05/26 10:18, Stefan Hajnoczi wrote:
> > On Mon, May 25, 2015 at 11:51:23AM +0800, Fam Zheng wrote:
> > > On Tue, 05/19 15:54, Stefan Hajnoczi wrote:
> > > > On Tue, May 19, 2015 at 10:51:01AM +0000, Fam Zheng wrote:
> > > > > This callback is called by the main loop before polling s->fd; if it
> > > > > returns false, the fd will not be polled in this iteration.
> > > > > 
> > > > > This is redundant with the check inside the read callback. After this
> > > > > patch, the data will be copied from s->fd to s->iov when it arrives. If
> > > > > the device can't receive, the packet will be queued to incoming_queue,
> > > > > and when the device status changes, this queue will be flushed.
> > > > > 
> > > > > Also remove the qemu_can_send_packet() check in netmap_send. If it's
> > > > > true, we are good; if it's false, qemu_sendv_packet_async() will return
> > > > > 0 and read poll will be disabled until netmap_send_completed() is
> > > > > called.
> > > > 
> > > > This causes unbounded memory usage in QEMU because
> > > > qemu_net_queue_append_iov() does not drop packets when sent_cb != NULL.
> > > 
> > > I think netmap_send will use "netmap_read_poll(s, false)" to stop reading, so
> > > only the first packet will be queued. Why is it unbounded?
> > 
> > I looked again and I agree with you.  It should stop after the first
> > packet and resume when the peer flushes the queue.
> > 
> 
> The other patches (socket and tap) have the same rationale. Are you happy with
> this approach?

Yes, but I have a question on the tap patch.
Patch

diff --git a/net/netmap.c b/net/netmap.c
index 0c1772b..b3efb5b 100644
--- a/net/netmap.c
+++ b/net/netmap.c
@@ -132,23 +132,13 @@  error:
     return -1;
 }
 
-/* Tell the event-loop if the netmap backend can send packets
-   to the frontend. */
-static int netmap_can_send(void *opaque)
-{
-    NetmapState *s = opaque;
-
-    return qemu_can_send_packet(&s->nc);
-}
-
 static void netmap_send(void *opaque);
 static void netmap_writable(void *opaque);
 
 /* Set the event-loop handlers for the netmap backend. */
 static void netmap_update_fd_handler(NetmapState *s)
 {
-    qemu_set_fd_handler2(s->me.fd,
-                         s->read_poll  ? netmap_can_send : NULL,
+    qemu_set_fd_handler2(s->me.fd, NULL,
                          s->read_poll  ? netmap_send     : NULL,
                          s->write_poll ? netmap_writable : NULL,
                          s);
@@ -317,7 +307,7 @@  static void netmap_send(void *opaque)
 
     /* Keep sending while there are available packets into the netmap
        RX ring and the forwarding path towards the peer is open. */
-    while (!nm_ring_empty(ring) && qemu_can_send_packet(&s->nc)) {
+    while (!nm_ring_empty(ring)) {
         uint32_t i;
         uint32_t idx;
         bool morefrag;