From patchwork Fri Feb 3 03:14:32 2017
X-Patchwork-Submitter: John Fastabend
X-Patchwork-Id: 723358
X-Patchwork-Delegate: davem@davemloft.net
From: John Fastabend
Subject: [net-next PATCH v2 1/5] virtio_net: wrap rtnl_lock in test for
 calling with lock already held
To: kubakici@wp.pl, jasowang@redhat.com, ast@fb.com, mst@redhat.com
Cc: john.r.fastabend@intel.com, netdev@vger.kernel.org,
 john.fastabend@gmail.com
Date: Thu, 02 Feb 2017 19:14:32 -0800
Message-ID: <20170203031432.23054.15091.stgit@john-Precision-Tower-5810>
In-Reply-To: <20170203031251.23054.25387.stgit@john-Precision-Tower-5810>
References: <20170203031251.23054.25387.stgit@john-Precision-Tower-5810>
User-Agent: StGit/0.17.1-dirty
X-Mailing-List: netdev@vger.kernel.org

For the XDP use case, and to allow ethtool reset tests, it is useful to
be able to use the reset paths from contexts where the rtnl lock is
already held. This requires updating virtnet_set_queues and
free_receive_bufs, the two places where rtnl_lock is taken in
virtio_net. To do this we use the following pattern:

	_foo(...) { do stuff }
	foo(...) { rtnl_lock(); _foo(...); rtnl_unlock(); }

This allows us to use the freeze()/restore() flow from both contexts.
Signed-off-by: John Fastabend
Acked-by: Jason Wang
---
 drivers/net/virtio_net.c |   31 +++++++++++++++++++++----------
 1 file changed, 21 insertions(+), 10 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index bd22cf3..f8ba586 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1342,7 +1342,7 @@ static void virtnet_ack_link_announce(struct virtnet_info *vi)
 	rtnl_unlock();
 }
 
-static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
+static int _virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
 {
 	struct scatterlist sg;
 	struct net_device *dev = vi->dev;
@@ -1368,6 +1368,16 @@ static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
 	return 0;
 }
 
+static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
+{
+	int err;
+
+	rtnl_lock();
+	err = _virtnet_set_queues(vi, queue_pairs);
+	rtnl_unlock();
+	return err;
+}
+
 static int virtnet_close(struct net_device *dev)
 {
 	struct virtnet_info *vi = netdev_priv(dev);
@@ -1620,7 +1630,7 @@ static int virtnet_set_channels(struct net_device *dev,
 		return -EINVAL;
 
 	get_online_cpus();
-	err = virtnet_set_queues(vi, queue_pairs);
+	err = _virtnet_set_queues(vi, queue_pairs);
 	if (!err) {
 		netif_set_real_num_tx_queues(dev, queue_pairs);
 		netif_set_real_num_rx_queues(dev, queue_pairs);
@@ -1752,7 +1762,7 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog)
 		return -ENOMEM;
 	}
 
-	err = virtnet_set_queues(vi, curr_qp + xdp_qp);
+	err = _virtnet_set_queues(vi, curr_qp + xdp_qp);
 	if (err) {
 		dev_warn(&dev->dev, "XDP Device queue allocation failure.\n");
 		return err;
@@ -1761,7 +1771,7 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog)
 	if (prog) {
 		prog = bpf_prog_add(prog, vi->max_queue_pairs - 1);
 		if (IS_ERR(prog)) {
-			virtnet_set_queues(vi, curr_qp);
+			_virtnet_set_queues(vi, curr_qp);
 			return PTR_ERR(prog);
 		}
 	}
@@ -1880,12 +1890,11 @@ static void virtnet_free_queues(struct virtnet_info *vi)
 	kfree(vi->sq);
 }
 
-static void free_receive_bufs(struct virtnet_info *vi)
+static void _free_receive_bufs(struct virtnet_info *vi)
 {
 	struct bpf_prog *old_prog;
 	int i;
 
-	rtnl_lock();
 	for (i = 0; i < vi->max_queue_pairs; i++) {
 		while (vi->rq[i].pages)
 			__free_pages(get_a_page(&vi->rq[i], GFP_KERNEL), 0);
@@ -1895,6 +1904,12 @@ static void free_receive_bufs(struct virtnet_info *vi)
 		if (old_prog)
 			bpf_prog_put(old_prog);
 	}
+}
+
+static void free_receive_bufs(struct virtnet_info *vi)
+{
+	rtnl_lock();
+	_free_receive_bufs(vi);
 	rtnl_unlock();
 }
 
@@ -2333,9 +2348,7 @@ static int virtnet_probe(struct virtio_device *vdev)
 		goto free_unregister_netdev;
 	}
 
-	rtnl_lock();
 	virtnet_set_queues(vi, vi->curr_queue_pairs);
-	rtnl_unlock();
 
 	/* Assume link up if device can't report link status,
 	   otherwise get link status from config. */
@@ -2444,9 +2457,7 @@ static int virtnet_restore(struct virtio_device *vdev)
 	netif_device_attach(vi->dev);
 
-	rtnl_lock();
 	virtnet_set_queues(vi, vi->curr_queue_pairs);
-	rtnl_unlock();
 
 	err = virtnet_cpu_notif_add(vi);
 	if (err)