diff mbox

xen: Drop net_rx_ok

Message ID 1435632157-9275-1-git-send-email-famz@redhat.com
State New
Headers show

Commit Message

Fam Zheng June 30, 2015, 2:42 a.m. UTC
This is necessary because once we return false from .can_receive, we
need to flush the queue when the .can_receive conditions become true
again (for example, when more buffers become available).

We can rely on net_rx_packet (which checks the same conditions) to drop
the packet if the device is not ready, so drop net_xen_info.can_receive.

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 hw/net/xen_nic.c | 22 ----------------------
 1 file changed, 22 deletions(-)

Comments

Stefan Hajnoczi July 2, 2015, 12:39 p.m. UTC | #1
On Tue, Jun 30, 2015 at 10:42:37AM +0800, Fam Zheng wrote:
> This is necessary because once we return false from .can_receive, we
> need to flush the queue when the .can_receive conditions become true
> again, (for example when more buffer is available).
> 
> We can rely on net_rx_packet (which checks the same conditions) to drop
> the packet if the device is not ready, so drop net_xen_info.can_receive.

This patch changes behavior:

Previously, can_receive() returning false meant packets were queued.

Now those same conditions result in net_rx_packet() returning -1, so
packets are discarded.

In order to keep the spirit of the queuing mechanism - where we tell a
sender to hold off until more rx buffers become available - I think the
following line in net_rx_packet() needs to be changed:

  if (rc == rp || RING_REQUEST_CONS_OVERFLOW(&netdev->rx_ring, rc)) {
      xen_be_printf(&netdev->xendev, 2, "no buffer, drop packet\n");
      return -1;  <-- this should be changed to return 0
  }

That change assumes that net_event()'s flush is always called when the
rx ring gets more free space.

Any thoughts from Xen folks?
Stefan Hajnoczi July 20, 2015, 5:12 p.m. UTC | #2
On Thu, Jul 02, 2015 at 01:39:16PM +0100, Stefan Hajnoczi wrote:
> [...]
> 
> Any thoughts from Xen folks?

Ping?
Stefan Hajnoczi July 27, 2015, 1:16 p.m. UTC | #3
On Mon, Jul 20, 2015 at 06:12:09PM +0100, Stefan Hajnoczi wrote:
> [...]
> 
> Ping?

Need input from Xen developers.

Ping?
Chen Gang July 27, 2015, 10:05 p.m. UTC | #4
On 7/27/15 21:16, Stefan Hajnoczi wrote:
> [...]
> 
> Need input from Xen developers.
> 
> Ping?
> 

In fact, I am not one of the Xen folks, but since I was Cc'ed I will
share my opinion.

To me, it is reasonable to use "return 0" instead of "return -1".

 - It makes full use of the current API and stays compatible with the
   original usage. Since the return value is a size, all ranges need to
   be considered (zero, negative, and positive numbers, as in the
   standard read/write APIs).

 - Compared with the other NetClientInfo.receive providers, some of
   them handle this case (e.g. virtio), while others do not (perhaps
   each has its own reason -- e.g. the hardware never generates a
   0 size).

Any other members' ideas, suggestions, and corrections are welcome.


Thanks.
Stefano Stabellini July 29, 2015, 9:28 a.m. UTC | #5
On Mon, 27 Jul 2015, Stefan Hajnoczi wrote:
> [...]
> 
> Need input from Xen developers.

FYI I don't have an opinion on this: I am not familiar with this code,
as xen_nic is the one Xen PV backend in QEMU that is not used with Xen.
Theoretically it could be used, but in practice there is no way to
select it from the toolstack.

Patch

diff --git a/hw/net/xen_nic.c b/hw/net/xen_nic.c
index 19ecfc4..da9edc0 100644
--- a/hw/net/xen_nic.c
+++ b/hw/net/xen_nic.c
@@ -234,27 +234,6 @@  static void net_rx_response(struct XenNetDev *netdev,
 
 #define NET_IP_ALIGN 2
 
-static int net_rx_ok(NetClientState *nc)
-{
-    struct XenNetDev *netdev = qemu_get_nic_opaque(nc);
-    RING_IDX rc, rp;
-
-    if (netdev->xendev.be_state != XenbusStateConnected) {
-        return 0;
-    }
-
-    rc = netdev->rx_ring.req_cons;
-    rp = netdev->rx_ring.sring->req_prod;
-    xen_rmb();
-
-    if (rc == rp || RING_REQUEST_CONS_OVERFLOW(&netdev->rx_ring, rc)) {
-        xen_be_printf(&netdev->xendev, 2, "%s: no rx buffers (%d/%d)\n",
-                      __FUNCTION__, rc, rp);
-        return 0;
-    }
-    return 1;
-}
-
 static ssize_t net_rx_packet(NetClientState *nc, const uint8_t *buf, size_t size)
 {
     struct XenNetDev *netdev = qemu_get_nic_opaque(nc);
@@ -304,7 +283,6 @@  static ssize_t net_rx_packet(NetClientState *nc, const uint8_t *buf, size_t size
 static NetClientInfo net_xen_info = {
     .type = NET_CLIENT_OPTIONS_KIND_NIC,
     .size = sizeof(NICState),
-    .can_receive = net_rx_ok,
     .receive = net_rx_packet,
 };