Message ID | 1488997457-24554-1-git-send-email-subashab@codeaurora.org
---|---
State | Superseded, archived
Delegated to | David Miller
On Wed, 2017-03-08 at 11:24 -0700, Subash Abhinov Kasiviswanathan wrote:
> While running a single stream UDPv6 test, we observed that amount
> of CPU spent in NET_RX softirq was much greater than UDPv4 for an
> equivalent receive rate. The test here was run on an ARM64 based
> Android system. On further analysis with perf, we found that UDPv6
> was spending significant time in the statistics netfilter targets
> which did socket lookup per packet. These statistics rules perform
> a lookup when there is no socket associated with the skb. Since
> there are multiple instances of these rules based on UID, there
> will be equal number of lookups per skb.
>
> By introducing early demux for UDPv6, we avoid the redundant lookups.
> This also helped to improve the performance (800Mbps -> 870Mbps) on a
> CPU limited system in a single stream UDPv6 receive test with 1450
> byte sized datagrams using iperf.

Well, this 'optimization' actually hurts when UDP sockets are not
connected, since this adds an extra cache line miss per incoming
packet.

(DNS servers for example)

> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
> ---

> +
> +	if (dst)
> +		dst = dst_check(dst, 0);

IPv6 uses a cookie to validate dst, not 0.
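[Editor's note: for readers unfamiliar with the cookie Eric refers to, a v2 along these lines would validate the cached dst against the IPv6 cookie stored when `sk->sk_rx_dst` was cached, much as `tcp_v6_early_demux()` does. A kernel-style sketch only, not a complete patch:

	/* Sketch: validate the cached dst with the saved IPv6 cookie
	 * rather than 0, mirroring tcp_v6_early_demux(). */
	dst = READ_ONCE(sk->sk_rx_dst);
	if (dst)
		dst = dst_check(dst, inet6_sk(sk)->rx_dst_cookie);

Passing 0 defeats the check, since the IPv6 `dst_check()` path compares the cookie against the routing state that produced the dst.]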
On 2017-03-08 11:40, Eric Dumazet wrote:
> Well, this 'optimization' actually hurts when UDP sockets are not
> connected, since this adds an extra cache line miss per incoming
> packet.
>
> (DNS servers for example)

Hi Eric

Thanks for your comments. Would it be preferable to disable early demux
for the servers with large unconnected workloads in that case?

>>
>> Signed-off-by: Subash Abhinov Kasiviswanathan
>> <subashab@codeaurora.org>
>> ---
>
>> +
>> +	if (dst)
>> +		dst = dst_check(dst, 0);
>
> IPv6 uses a cookie to validate dst, not 0.

I'll update this and send v2.

--
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
On Wed, 2017-03-08 at 12:11 -0700, Subash Abhinov Kasiviswanathan wrote:
> On 2017-03-08 11:40, Eric Dumazet wrote:
> > Well, this 'optimization' actually hurts when UDP sockets are not
> > connected, since this adds an extra cache line miss per incoming
> > packet.
> >
> > (DNS servers for example)
>
> Hi Eric
>
> Thanks for your comments. Would it be preferable to disable early demux
> for the servers with large unconnected workloads in that case?

Well, many servers handle both TCP and UDP.

For TCP, there is no question about early demux, this is definitely a
win.

We probably should have one sysctl to enable TCP early demux, one for
UDP early demux.
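[Editor's note: per-protocol knobs along these lines were eventually merged into mainline (Linux 4.12); they live under the ipv4 sysctl tree but govern both address families. A sysctl.conf fragment showing how the two workloads discussed here could then be tuned independently:

	# DNS-style server, mostly unconnected UDP sockets: skip the
	# UDP early demux but keep the TCP fast path.
	net.ipv4.udp_early_demux = 0

	# Pure forwarding box: neither lookup can ever match a local
	# socket, so disable both.
	net.ipv4.tcp_early_demux = 0
	net.ipv4.udp_early_demux = 0

Both default to 1, preserving the pre-existing behavior.]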
On Wed, Mar 08, 2017 at 11:22:01AM -0800, Eric Dumazet wrote:
> On Wed, 2017-03-08 at 12:11 -0700, Subash Abhinov Kasiviswanathan wrote:
> > On 2017-03-08 11:40, Eric Dumazet wrote:
> > > Well, this 'optimization' actually hurts when UDP sockets are not
> > > connected, since this adds an extra cache line miss per incoming
> > > packet.
> > >
> > > (DNS servers for example)
> >
> > Hi Eric
> >
> > Thanks for your comments. Would it be preferable to disable early demux
> > for the servers with large unconnected workloads in that case?
>
> Well, many servers handle both TCP and UDP.
>
> For TCP, there is no question about early demux, this is definitely a
> win.
>
> We probably should have one sysctl to enable TCP early demux, one for
> UDP early demux.

If early demux is a clear win for TCP then I wonder if it is
unnecessary and by some leap also undesirable to have a configuration
knob for that case.
From: Simon Horman <simon.horman@netronome.com>
Date: Tue, 18 Apr 2017 17:09:04 +0900

> On Wed, Mar 08, 2017 at 11:22:01AM -0800, Eric Dumazet wrote:
>> On Wed, 2017-03-08 at 12:11 -0700, Subash Abhinov Kasiviswanathan wrote:
>> > On 2017-03-08 11:40, Eric Dumazet wrote:
>> > > Well, this 'optimization' actually hurts when UDP sockets are not
>> > > connected, since this adds an extra cache line miss per incoming
>> > > packet.
>> > >
>> > > (DNS servers for example)
>> >
>> > Hi Eric
>> >
>> > Thanks for your comments. Would it be preferable to disable early demux
>> > for the servers with large unconnected workloads in that case?
>>
>> Well, many servers handle both TCP and UDP.
>>
>> For TCP, there is no question about early demux, this is definitely a
>> win.
>>
>> We probably should have one sysctl to enable TCP early demux, one for
>> UDP early demux.
>
> If early demux is a clear win for TCP then I wonder if it is
> unnecessary and by some leap also undesirable to have a configuration
> knob for that case.

For forwarding workloads it is pure overhead since the early demux will
never find a local socket, and therefore it is wasted work.
On Tue, Apr 18, 2017, at 17:16, David Miller wrote:
> From: Simon Horman <simon.horman@netronome.com>
> Date: Tue, 18 Apr 2017 17:09:04 +0900
>
> > On Wed, Mar 08, 2017 at 11:22:01AM -0800, Eric Dumazet wrote:
> >> On Wed, 2017-03-08 at 12:11 -0700, Subash Abhinov Kasiviswanathan wrote:
> >> > On 2017-03-08 11:40, Eric Dumazet wrote:
> >> > > Well, this 'optimization' actually hurts when UDP sockets are not
> >> > > connected, since this adds an extra cache line miss per incoming
> >> > > packet.
> >> > >
> >> > > (DNS servers for example)
> >> >
> >> > Hi Eric
> >> >
> >> > Thanks for your comments. Would it be preferable to disable early demux
> >> > for the servers with large unconnected workloads in that case?
> >>
> >> Well, many servers handle both TCP and UDP.
> >>
> >> For TCP, there is no question about early demux, this is definitely a
> >> win.
> >>
> >> We probably should have one sysctl to enable TCP early demux, one for
> >> UDP early demux.
> >
> > If early demux is a clear win for TCP then I wonder if it is
> > unnecessary and by some leap also undesirable to have a configuration
> > knob for that case.
>
> For forwarding workloads it is pure overhead since the early demux will
> never find a local socket, and therefore it is wasted work.

Also, for some more complicated fib rules setups, the early demux logic
could end up doing wrong lookups, because some route might not actually
be local under one fib rule but is under another.

Bye,
Hannes
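[Editor's note: a contrived illustration of the fib-rules scenario Hannes describes, with made-up addresses and table numbers. The same destination is local under one rule's table but forwarded under another's, so any socket lookup performed before the rules are evaluated can act on the wrong view:

	# Packets arriving on eth1 consult table 100; everything else
	# uses the main table (illustrative setup only).
	ip rule add iif eth1 lookup 100

	# In table 100 the address is local...
	ip route add local 192.0.2.1 dev lo table 100

	# ...but in the main table the very same address is forwarded.
	ip route add 192.0.2.1 via 198.51.100.1 table main

Early demux runs before the rule lookup, so it may match a local socket for traffic that the applicable table would have forwarded.]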
On Tue, Apr 18, 2017 at 08:09:08PM +0200, Hannes Frederic Sowa wrote:
>
> On Tue, Apr 18, 2017, at 17:16, David Miller wrote:
> > From: Simon Horman <simon.horman@netronome.com>
> > Date: Tue, 18 Apr 2017 17:09:04 +0900
> >
> > > On Wed, Mar 08, 2017 at 11:22:01AM -0800, Eric Dumazet wrote:
> > >> On Wed, 2017-03-08 at 12:11 -0700, Subash Abhinov Kasiviswanathan wrote:
> > >> > On 2017-03-08 11:40, Eric Dumazet wrote:
> > >> > > Well, this 'optimization' actually hurts when UDP sockets are not
> > >> > > connected, since this adds an extra cache line miss per incoming
> > >> > > packet.
> > >> > >
> > >> > > (DNS servers for example)
> > >> >
> > >> > Hi Eric
> > >> >
> > >> > Thanks for your comments. Would it be preferable to disable early demux
> > >> > for the servers with large unconnected workloads in that case?
> > >>
> > >> Well, many servers handle both TCP and UDP.
> > >>
> > >> For TCP, there is no question about early demux, this is definitely a
> > >> win.
> > >>
> > >> We probably should have one sysctl to enable TCP early demux, one for
> > >> UDP early demux.
> > >
> > > If early demux is a clear win for TCP then I wonder if it is
> > > unnecessary and by some leap also undesirable to have a configuration
> > > knob for that case.
> >
> > For forwarding workloads it is pure overhead since the early demux will
> > never find a local socket, and therefore it is wasted work.
>
> Also for some more complicated fib rules setups the early demux logic
> could end up doing wrong lookups, because some route might not
> actually be local in one fib rule but it is in another one.

Thanks for the clarification. Knobs for both TCP and UDP now make
perfect sense to me.
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index 221825a..76501ce 100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -851,6 +851,65 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
 	return 0;
 }
 
+static struct sock *__udp6_lib_demux_lookup(struct net *net,
+			__be16 loc_port, const struct in6_addr *loc_addr,
+			__be16 rmt_port, const struct in6_addr *rmt_addr,
+			int dif)
+{
+	struct sock *sk;
+
+	rcu_read_lock();
+	sk = __udp6_lib_lookup(net, rmt_addr, rmt_port, loc_addr, loc_port,
+			       dif, &udp_table, NULL);
+	if (sk && !atomic_inc_not_zero(&sk->sk_refcnt))
+		sk = NULL;
+	rcu_read_unlock();
+
+	return sk;
+}
+
+static void udp_v6_early_demux(struct sk_buff *skb)
+{
+	struct net *net = dev_net(skb->dev);
+	const struct udphdr *uh;
+	struct sock *sk;
+	struct dst_entry *dst;
+	int dif = skb->dev->ifindex;
+
+	if (!pskb_may_pull(skb, skb_transport_offset(skb) +
+			   sizeof(struct udphdr)))
+		return;
+
+	uh = udp_hdr(skb);
+
+	if (skb->pkt_type == PACKET_HOST)
+		sk = __udp6_lib_demux_lookup(net, uh->dest,
+					     &ipv6_hdr(skb)->daddr,
+					     uh->source,
+					     &ipv6_hdr(skb)->saddr,
+					     dif);
+	else
+		return;
+
+	if (!sk)
+		return;
+
+	skb->sk = sk;
+
+	skb->destructor = sock_efree;
+	dst = READ_ONCE(sk->sk_rx_dst);
+
+	if (dst)
+		dst = dst_check(dst, 0);
+	if (dst) {
+		if (dst->flags & DST_NOCACHE) {
+			if (likely(atomic_inc_not_zero(&dst->__refcnt)))
+				skb_dst_set(skb, dst);
+		} else {
+			skb_dst_set_noref(skb, dst);
+		}
+	}
+}
+
 static __inline__ int udpv6_rcv(struct sk_buff *skb)
 {
 	return __udp6_lib_rcv(skb, &udp_table, IPPROTO_UDP);
@@ -1365,6 +1424,7 @@ int compat_udpv6_getsockopt(struct sock *sk, int level, int optname,
 #endif
 
 static const struct inet6_protocol udpv6_protocol = {
+	.early_demux	= udp_v6_early_demux,
 	.handler	= udpv6_rcv,
 	.err_handler	= udpv6_err,
 	.flags		= INET6_PROTO_NOPOLICY|INET6_PROTO_FINAL,
While running a single stream UDPv6 test, we observed that the amount
of CPU time spent in NET_RX softirq was much greater than for UDPv4 at
an equivalent receive rate. The test here was run on an ARM64 based
Android system. On further analysis with perf, we found that UDPv6
was spending significant time in the statistics netfilter targets,
which perform a socket lookup per packet. These statistics rules
perform a lookup when there is no socket associated with the skb.
Since there are multiple instances of these rules based on UID, there
will be an equal number of lookups per skb.

By introducing early demux for UDPv6, we avoid the redundant lookups.
This also helped to improve the performance (800Mbps -> 870Mbps) on a
CPU limited system in a single stream UDPv6 receive test with 1450
byte datagrams using iperf.

Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
---
 net/ipv6/udp.c | 60 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 60 insertions(+)