From patchwork Mon Apr 23 13:51:47 2012
X-Patchwork-Submitter: Jiang Liu
X-Patchwork-Id: 154450
From: Jiang Liu
To: Vinod Koul, Dan Williams
Cc: Jiang Liu, Keping Chen, "David S. Miller", Alexey Kuznetsov,
 James Morris, Hideaki YOSHIFUJI, Patrick McHardy,
 netdev@vger.kernel.org, linux-pci@vger.kernel.org,
 linux-kernel@vger.kernel.org, Jiang Liu
Subject: [PATCH v1 6/8] dmaengine: enhance network subsystem to support DMA
 device hotplug
Date: Mon, 23 Apr 2012 21:51:47 +0800
Message-Id: <1335189109-4871-7-git-send-email-jiang.liu@huawei.com>
X-Mailer: git-send-email 1.7.5.4
In-Reply-To: <1335189109-4871-1-git-send-email-jiang.liu@huawei.com>
References: <1335189109-4871-1-git-send-email-jiang.liu@huawei.com>

Enhance the network subsystem to correctly update DMA channel reference
counts, so it does not break the DMA device hotplug logic.
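The new helpers in include/net/netdma.h wrap the pattern the TCP paths
repeat: lazily look up a channel for the socket, then drop the
reference once the socket is done with it.  A minimal sketch of the
intended pairing in a receive path (assuming, per the earlier patches
in this series, that net_dma_find_channel() returns the channel with a
reference held and dma_put_channel() releases that reference):

	struct tcp_sock *tp = tcp_sk(sk);

	if (net_dma_get_channel(tp)) {
		/* tp->ucopy.dma_chan now holds a counted reference;
		 * queue dma_skb_copy_datagram_iovec() work on it here.
		 */
	}

	/* once tcp_service_net_dma() has drained the queue: */
	net_dma_put_channel(tp);	/* drop the ref, clear dma_chan */

net_dma_capable() is the probe-only variant: it takes a reference and
immediately drops it, so callers can ask "is offload available?"
without holding a channel across the operation.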
Signed-off-by: Jiang Liu
---
 include/net/netdma.h |   26 ++++++++++++++++++++++++++
 net/ipv4/tcp.c       |   10 +++-------
 net/ipv4/tcp_input.c |    5 +----
 net/ipv4/tcp_ipv4.c  |    4 +---
 net/ipv6/tcp_ipv6.c  |    4 +---
 5 files changed, 32 insertions(+), 17 deletions(-)

diff --git a/include/net/netdma.h b/include/net/netdma.h
index 8ba8ce2..6d71724 100644
--- a/include/net/netdma.h
+++ b/include/net/netdma.h
@@ -24,6 +24,32 @@
 #include <linux/dmaengine.h>
 #include <linux/skbuff.h>
 
+static inline bool
+net_dma_capable(void)
+{
+	struct dma_chan *chan = net_dma_find_channel();
+	dma_put_channel(chan);
+
+	return !!chan;
+}
+
+static inline struct dma_chan *
+net_dma_get_channel(struct tcp_sock *tp)
+{
+	if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list)
+		tp->ucopy.dma_chan = net_dma_find_channel();
+	return tp->ucopy.dma_chan;
+}
+
+static inline void
+net_dma_put_channel(struct tcp_sock *tp)
+{
+	if (tp->ucopy.dma_chan) {
+		dma_put_channel(tp->ucopy.dma_chan);
+		tp->ucopy.dma_chan = NULL;
+	}
+}
+
 int dma_skb_copy_datagram_iovec(struct dma_chan* chan,
 		struct sk_buff *skb, int offset, struct iovec *to,
 		size_t len, struct dma_pinned_list *pinned_list);
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 8bb6ade..aea4032 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1451,8 +1451,7 @@ int tcp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
 		available = TCP_SKB_CB(skb)->seq + skb->len - (*seq);
 		if ((available < target) &&
 		    (len > sysctl_tcp_dma_copybreak) && !(flags & MSG_PEEK) &&
-		    !sysctl_tcp_low_latency &&
-		    net_dma_find_channel()) {
+		    !sysctl_tcp_low_latency && net_dma_capable()) {
 			preempt_enable_no_resched();
 			tp->ucopy.pinned_list =
 					dma_pin_iovec_pages(msg->msg_iov, len);
@@ -1666,10 +1665,7 @@ do_prequeue:
 
 		if (!(flags & MSG_TRUNC)) {
 #ifdef CONFIG_NET_DMA
-			if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list)
-				tp->ucopy.dma_chan = net_dma_find_channel();
-
-			if (tp->ucopy.dma_chan) {
+			if (net_dma_get_channel(tp)) {
 				tp->ucopy.dma_cookie = dma_skb_copy_datagram_iovec(
 					tp->ucopy.dma_chan, skb, offset,
 					msg->msg_iov, used,
@@ -1758,7 +1754,7 @@ skip_copy:
 
 #ifdef CONFIG_NET_DMA
 	tcp_service_net_dma(sk, true);	/* Wait for queue to drain */
-	tp->ucopy.dma_chan = NULL;
+	net_dma_put_channel(tp);
 
 	if (tp->ucopy.pinned_list) {
 		dma_unpin_iovec_pages(tp->ucopy.pinned_list);
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 9944c1d..3878916 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -5227,10 +5227,7 @@ static int tcp_dma_try_early_copy(struct sock *sk, struct sk_buff *skb,
 	if (tp->ucopy.wakeup)
 		return 0;
 
-	if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list)
-		tp->ucopy.dma_chan = net_dma_find_channel();
-
-	if (tp->ucopy.dma_chan && skb_csum_unnecessary(skb)) {
+	if (net_dma_get_channel(tp) && skb_csum_unnecessary(skb)) {
 		dma_cookie = dma_skb_copy_datagram_iovec(tp->ucopy.dma_chan,
 							 skb, hlen,
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 0cb86ce..90ea1c0 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1729,9 +1729,7 @@ process:
 	if (!sock_owned_by_user(sk)) {
 #ifdef CONFIG_NET_DMA
 		struct tcp_sock *tp = tcp_sk(sk);
-		if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list)
-			tp->ucopy.dma_chan = net_dma_find_channel();
-		if (tp->ucopy.dma_chan)
+		if (net_dma_get_channel(tp))
 			ret = tcp_v4_do_rcv(sk, skb);
 		else
 #endif
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 86cfe60..fb81bbd 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -1644,9 +1644,7 @@ process:
 	if (!sock_owned_by_user(sk)) {
 #ifdef CONFIG_NET_DMA
 		struct tcp_sock *tp = tcp_sk(sk);
-		if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list)
-			tp->ucopy.dma_chan = net_dma_find_channel();
-		if (tp->ucopy.dma_chan)
+		if (net_dma_get_channel(tp))
 			ret = tcp_v6_do_rcv(sk, skb);
 		else
 #endif
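Note the tcp_recvmsg() cleanup change above: the old code merely forgot
the channel (tp->ucopy.dma_chan = NULL), which under reference-counted
channels would leak the reference and could keep a hot-removed DMA
device from ever going away.  A sketch of the corrected teardown order
(the hotplug core that waits for the count to drop to zero is assumed
from the earlier patches in this series):

	tcp_service_net_dma(sk, true);	/* wait for in-flight copies */
	net_dma_put_channel(tp);	/* release the channel reference */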