From patchwork Tue Oct 21 14:16:55 2014
From: Sowmini Varadhan
X-Patchwork-Submitter: Sowmini Varadhan
X-Patchwork-Id: 401518
X-Patchwork-Delegate: davem@davemloft.net
Date: Tue, 21 Oct 2014 10:16:55 -0400
To: davem@davemloft.net, sowmini.varadhan@oracle.com
Cc: netdev@vger.kernel.org
Subject: [PATCHv4 net-next 4/4] sunvnet: Remove irqsave/irqrestore on vio.lock
Message-ID: <20141021141655.GG15405@oracle.com>
X-Mailing-List: netdev@vger.kernel.org

After the NAPIfication of sunvnet, we no longer need to synchronize by
doing irqsave/irqrestore on vio.lock in the I/O fastpath. NAPI ->poll()
is non-reentrant, so all RX processing occurs strictly in a serialized
environment. TX reclaim is done in NAPI context, so netif_tx_lock can be
used to serialize critical sections between the Tx and Rx paths.
Signed-off-by: Sowmini Varadhan
---
 drivers/net/ethernet/sun/sunvnet.c | 30 +++++-------------------------
 1 file changed, 5 insertions(+), 25 deletions(-)

diff --git a/drivers/net/ethernet/sun/sunvnet.c b/drivers/net/ethernet/sun/sunvnet.c
index 055061d..c1c5820 100644
--- a/drivers/net/ethernet/sun/sunvnet.c
+++ b/drivers/net/ethernet/sun/sunvnet.c
@@ -838,18 +838,6 @@ struct vnet_port *__tx_port_find(struct vnet *vp, struct sk_buff *skb)
 	return NULL;
 }
 
-struct vnet_port *tx_port_find(struct vnet *vp, struct sk_buff *skb)
-{
-	struct vnet_port *ret;
-	unsigned long flags;
-
-	spin_lock_irqsave(&vp->lock, flags);
-	ret = __tx_port_find(vp, skb);
-	spin_unlock_irqrestore(&vp->lock, flags);
-
-	return ret;
-}
-
 static struct sk_buff *vnet_clean_tx_ring(struct vnet_port *port,
 					  unsigned *pending)
 {
@@ -910,11 +898,10 @@ static void vnet_clean_timer_expire(unsigned long port0)
 	struct vnet_port *port = (struct vnet_port *)port0;
 	struct sk_buff *freeskbs;
 	unsigned pending;
-	unsigned long flags;
 
-	spin_lock_irqsave(&port->vio.lock, flags);
+	netif_tx_lock(port->vp->dev);
 	freeskbs = vnet_clean_tx_ring(port, &pending);
-	spin_unlock_irqrestore(&port->vio.lock, flags);
+	netif_tx_unlock(port->vp->dev);
 
 	vnet_free_skbs(freeskbs);
 
@@ -967,7 +954,6 @@ static int vnet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	struct vnet_port *port = NULL;
 	struct vio_dring_state *dr;
 	struct vio_net_desc *d;
-	unsigned long flags;
 	unsigned int len;
 	struct sk_buff *freeskbs = NULL;
 	int i, err, txi;
@@ -980,7 +966,7 @@ static int vnet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		goto out_dropped;
 
 	rcu_read_lock();
-	port = tx_port_find(vp, skb);
+	port = __tx_port_find(vp, skb);
 	if (unlikely(!port))
 		goto out_dropped;
 
@@ -1017,8 +1003,6 @@ static int vnet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		goto out_dropped;
 	}
 
-	spin_lock_irqsave(&port->vio.lock, flags);
-
 	dr = &port->vio.drings[VIO_DRIVER_TX_RING];
 	if (unlikely(vnet_tx_dring_avail(dr) < 2)) {
 		if (!netif_queue_stopped(dev)) {
@@ -1052,7 +1036,7 @@ static int vnet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 			     (LDC_MAP_SHADOW | LDC_MAP_DIRECT | LDC_MAP_RW));
 	if (err < 0) {
 		netdev_info(dev, "tx buffer map error %d\n", err);
-		goto out_dropped_unlock;
+		goto out_dropped;
 	}
 	port->tx_bufs[txi].ncookies = err;
 
@@ -1105,7 +1089,7 @@ static int vnet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		netdev_info(dev, "TX trigger error %d\n", err);
 		d->hdr.state = VIO_DESC_FREE;
 		dev->stats.tx_carrier_errors++;
-		goto out_dropped_unlock;
+		goto out_dropped;
 	}
 
 ldc_start_done:
@@ -1121,7 +1105,6 @@ ldc_start_done:
 		netif_wake_queue(dev);
 	}
 
-	spin_unlock_irqrestore(&port->vio.lock, flags);
 	(void)mod_timer(&port->clean_timer, jiffies + VNET_CLEAN_TIMEOUT);
 	rcu_read_unlock();
 
@@ -1129,9 +1112,6 @@ ldc_start_done:
 	return NETDEV_TX_OK;
 
-out_dropped_unlock:
-	spin_unlock_irqrestore(&port->vio.lock, flags);
-
 out_dropped:
 	if (pending)
 		(void)mod_timer(&port->clean_timer,
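
[Editor's note, not part of the patch.] For readers unfamiliar with the
locking model this patch moves to, the fragment below is a minimal sketch
of the general pattern under made-up names (foo_priv, foo_reclaim_tx_ring,
foo_clean_timer_expire and foo_start_xmit are hypothetical, not sunvnet
code): a timer-driven TX reclaim serialized against ndo_start_xmit() with
netif_tx_lock() instead of a driver spinlock taken with irqsave, while the
transmit path relies on RCU for its lookups.

/* Illustrative sketch only; assumes a single-queue, non-LLTX device. */
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/timer.h>
#include <linux/rcupdate.h>

struct foo_priv {			/* hypothetical driver state */
	struct net_device	*dev;
	struct timer_list	clean_timer;
};

/* Hypothetical helper: unlink completed skbs from the TX ring and return
 * them as a list chained through skb->next (NULL when nothing to free).
 */
static struct sk_buff *foo_reclaim_tx_ring(struct foo_priv *fp)
{
	return NULL;
}

static void foo_clean_timer_expire(unsigned long data)
{
	struct foo_priv *fp = (struct foo_priv *)data;
	struct sk_buff *skb, *next;

	/* Before: spin_lock_irqsave(&fp->lock, flags) around the reclaim.
	 * After:  netif_tx_lock() takes the device's xmit lock, which the
	 * core already holds around ndo_start_xmit() for a non-LLTX device,
	 * so the timer and the transmit path cannot run concurrently.
	 */
	netif_tx_lock(fp->dev);
	skb = foo_reclaim_tx_ring(fp);
	netif_tx_unlock(fp->dev);

	/* Free the reclaimed skbs outside the lock. */
	while (skb) {
		next = skb->next;
		skb->next = NULL;
		dev_kfree_skb(skb);
		skb = next;
	}
}

static netdev_tx_t foo_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	/* Runs under the core's xmit lock; per-port lookup state is read
	 * under RCU, so no driver spinlock (and no irqsave) is needed here.
	 */
	rcu_read_lock();
	/* ... map skb onto the descriptor ring and kick the device ... */
	rcu_read_unlock();

	dev_kfree_skb(skb);	/* placeholder: the sketch just drops it */
	return NETDEV_TX_OK;
}

Roughly speaking, this is sufficient for the reason given in the changelog:
once nothing in the fast path runs in hard-irq context, the _irqsave
variants no longer protect against anything, and netif_tx_lock() contends
on the same xmit lock the core takes around ndo_start_xmit().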