
[net-next,04/11] ixgbevf: Add flag to indicate when rx is in net poll

Message ID 1352815405-751-5-git-send-email-jeffrey.t.kirsher@intel.com
State Accepted, archived
Delegated to: David Miller

Commit Message

Kirsher, Jeffrey T Nov. 13, 2012, 2:03 p.m. UTC
From: Greg Rose <gregory.v.rose@intel.com>

napi_gro_receive shouldn't be called from netpoll context.  Doing
so was causing kernel panics when jumbo frames larger than 2K were set.
Add a flag to check if the Rx ring processing is occurring from interrupt
context or from netpoll context and call netif_rx() if in the polling
context.

Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
 drivers/net/ethernet/intel/ixgbevf/ixgbevf.h      | 1 +
 drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 7 ++++++-
 2 files changed, 7 insertions(+), 1 deletion(-)

Comments

David Miller Nov. 13, 2012, 7:20 p.m. UTC | #1
From: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Date: Tue, 13 Nov 2012 06:03:18 -0800

> From: Greg Rose <gregory.v.rose@intel.com>
> 
> napi_gro_receive shouldn't be called from netpoll context.  Doing
> so was causing kernel panics when jumbo frames larger than 2K were set.
> Add a flag to check if the Rx ring processing is occurring from interrupt
> context or from netpoll context and call netif_rx() if in the polling
> context.
> 
> Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
> Tested-by: Sibai Li <sibai.li@intel.com>
> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

This is not a scalable solution.

It is not prudent to have every single driver do a check like
this.  If using GRO receive from netpoll causes problems,
then it's a generic issue rather than a driver specific one.
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Rose, Gregory V Nov. 13, 2012, 7:25 p.m. UTC | #2
On Tue, 13 Nov 2012 14:20:05 -0500
David Miller <davem@davemloft.net> wrote:

> From: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> Date: Tue, 13 Nov 2012 06:03:18 -0800
> 
> > From: Greg Rose <gregory.v.rose@intel.com>
> > 
> > napi_gro_receive shouldn't be called from netpoll context.  Doing
> > so was causing kernel panics when jumbo frames larger than 2K were
> > set. Add a flag to check if the Rx ring processing is occurring
> > from interrupt context or from netpoll context and call netif_rx()
> > if in the polling context.
> > 
> > Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
> > Tested-by: Sibai Li <sibai.li@intel.com>
> > Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> 
> This is not a scalable solution.
> 
> It is not prudent to have every single driver do a check like
> this.  If using GRO receive from netpoll causes problems,
> then it's a generic issue rather than a driver specific one.

OK, let me look into this a bit more then.

Thanks,

- Greg

Patch

diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h
index 2323ccd..9faaf54 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h
@@ -229,6 +229,7 @@  struct ixgbevf_adapter {
 	 */
 	u32 flags;
 #define IXGBE_FLAG_IN_WATCHDOG_TASK             (u32)(1)
+#define IXGBE_FLAG_IN_NETPOLL                   (u32)(1 << 1)
 
 	/* OS defined structs */
 	struct net_device *netdev;
diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
index ee5ff0e..00f9698 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
@@ -288,7 +288,10 @@  static void ixgbevf_receive_skb(struct ixgbevf_q_vector *q_vector,
 	if (is_vlan && test_bit(tag & VLAN_VID_MASK, adapter->active_vlans))
 		__vlan_hwaccel_put_tag(skb, tag);
 
-	napi_gro_receive(&q_vector->napi, skb);
+	if (!(adapter->flags & IXGBE_FLAG_IN_NETPOLL))
+		napi_gro_receive(&q_vector->napi, skb);
+	else
+		netif_rx(skb);
 }
 
 /**
@@ -550,9 +553,11 @@  static int ixgbevf_poll(struct napi_struct *napi, int budget)
 	else
 		per_ring_budget = budget;
 
+	adapter->flags |= IXGBE_FLAG_IN_NETPOLL;
 	ixgbevf_for_each_ring(ring, q_vector->rx)
 		clean_complete &= ixgbevf_clean_rx_irq(q_vector, ring,
 						       per_ring_budget);
+	adapter->flags &= ~IXGBE_FLAG_IN_NETPOLL;
 
 	/* If all work not completed, return budget and keep polling */
 	if (!clean_complete)