
[v2] rfs: Receive Flow Steering

Message ID 1270559096.2081.35.camel@edumazet-laptop
State RFC, archived
Delegated to: David Miller

Commit Message

Eric Dumazet April 6, 2010, 1:04 p.m. UTC
On Mon, 2010-04-05 at 22:56 -0700, Tom Herbert wrote:
> Version 2:
> - added a u16 filler to pad rps_dev_flow structure
> - define RPS_NO_CPU as 0xffff
> - add inet_rps_save_rxhash helper function to copy skb's rxhash into inet_sk
> - add a "voidflow" which can be used when get_rps_cpu does not return a flow (avoids some conditionals)
> - use raw_smp_processor_id in rps_record_sock_flow, there is no requirement to prevent preemption
> ---
> This patch implements software receive side packet steering (RPS).  RPS
> distributes the load of received packet processing across multiple CPUs.
> 
> Problem statement: Protocol processing done in the NAPI context for received
> packets is serialized per device queue and becomes a bottleneck under high
> packet load.  This substantially limits pps that can be achieved on a single
> queue NIC and provides no scaling with multiple cores.
> 
> This solution queues packets early on in the receive path on the backlog queues
> of other CPUs.   This allows protocol processing (e.g. IP and TCP) to be
> performed on packets in parallel.   For each device (or each receive queue in
> a multi-queue device) a mask of CPUs is set to indicate the CPUs that can
> process packets. A CPU is selected on a per packet basis by hashing contents
> of the packet header (e.g. the TCP or UDP 4-tuple) and using the result to index
> into the CPU mask.  The IPI mechanism is used to raise networking receive
> softirqs between CPUs.  This effectively emulates in software what a multi-queue
> NIC can provide, but is generic requiring no device support.
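
For illustration, the per-packet selection described above amounts to scaling
the 32-bit flow hash into the configured CPU map. A minimal sketch, with
made-up structure and function names (this is not the patch's actual
get_rps_cpu()):

#include <linux/types.h>

/*
 * Illustrative stand-in for the per-queue RPS map: the set of CPUs that
 * are allowed to process packets from this receive queue.
 */
struct example_rps_map {
	unsigned int len;	/* number of entries in cpus[] */
	u16 cpus[];		/* eligible CPU ids */
};

/*
 * Pick a target CPU for a packet with flow hash 'rxhash', or return -1
 * to keep processing on the CPU that took the interrupt.
 */
static int example_pick_cpu(const struct example_rps_map *map, u32 rxhash)
{
	if (!map || !map->len || !rxhash)
		return -1;

	/* scale the 32-bit hash into [0, map->len) without a division */
	return map->cpus[((u64)rxhash * map->len) >> 32];
}
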
> 
> Many devices now provide a hash over the 4-tuple on a per packet basis
> (e.g. the Toeplitz hash).  This patch allows drivers to set the HW reported hash
> in an skb field, and that value in turn is used to index into the RPS maps.
> Using the HW generated hash can avoid cache misses on the packet when
> steering it to a remote CPU.
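
For example, a driver's RX completion path might propagate the hardware hash
roughly like this (the __le32 parameter stands in for a descriptor field; the
names are hypothetical, only skb->rxhash comes from the patch series):

#include <linux/skbuff.h>

/*
 * Hypothetical driver helper: reuse the NIC's Toeplitz/RSS hash so RPS
 * does not have to touch the packet headers to compute one.
 */
static void example_set_rxhash(struct sk_buff *skb, __le32 hw_hash)
{
	skb->rxhash = le32_to_cpu(hw_hash);
}
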
> 
> The CPU mask is set on a per device and per queue basis in the sysfs variable
> /sys/class/net/<device>/queues/rx-<n>/rps_cpus.  This is a set of canonical
> bit maps for receive queues in the device (numbered by <n>).  If a device
> does not support multi-queue, a single variable is used for the device (rx-0).
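
For example, on a hypothetical single-queue device eth0, writing the hex mask
f to /sys/class/net/eth0/queues/rx-0/rps_cpus would allow packets from that
queue to be steered to CPUs 0 through 3.
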
> 
> Generally, we have found this technique increases pps capabilities of a single
> queue device with good CPU utilization.  Optimal settings for the CPU mask
> seem to depend on architecture and cache hierarchy.  Below are some results
> running 500 instances of netperf TCP_RR test with 1 byte req. and resp.
> Results show cumulative transaction rate and system CPU utilization.
> 
> e1000e on 8 core Intel
>    Without RPS: 108K tps at 33% CPU
>    With RPS:    311K tps at 64% CPU
> 
> forcedeth on 16 core AMD
>    Without RPS: 156K tps at 15% CPU
>    With RPS:    404K tps at 49% CPU
>    
> bnx2x on 16 core AMD
>    Without RPS: 567K tps at 61% CPU (4 HW RX queues)
>    Without RPS: 738K tps at 96% CPU (8 HW RX queues)
>    With RPS:    854K tps at 76% CPU (4 HW RX queues)
> 
> Caveats:
> - The benefits of this patch are dependent on architecture and cache hierarchy.
> Tuning the masks to get best performance is probably necessary.
> - This patch adds overhead in the path for processing a single packet.  In
> a lightly loaded server this overhead may eliminate the advantages of
> increased parallelism, and possibly cause some relative performance degradation.
> We have found that masks that are cache aware (share same caches with
> the interrupting CPU) mitigate much of this.
> - The RPS masks can be changed dynamically; however, whenever a mask is changed
> there is a possibility of generating out-of-order packets.  It's
> probably best not to change the masks too frequently.
> 
> Signed-off-by: Tom Herbert <therbert@google.com>
> ---

Running on a preprod machine here, seems fine.

Some questions:

1) The need to add "rps_flow_entries=xxx" at boot time is problematic.
   Maybe we could make it dynamic (and use vmalloc() instead of
alloc_large_system_hash())

2) inet_rps_save_rxhash(sk, skb->rxhash);

	It should have a check to make sure some part of the stack doesn't feed
many different rxhash values for a given socket (make sure we don't pollute the
flow table with pseudo-random values).

3) UDP connected sockets don't benefit from RFS currently.
   (Not sure many apps use connected UDP sockets; I do have some of them
in house.)

I am trying the following code, for IPv4 only:




Comments

Changli Gao April 6, 2010, 1:42 p.m. UTC | #1
On Tue, Apr 6, 2010 at 9:04 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
>
> 1) The need to add "rps_flow_entries=xxx" at boot time is problematic.
>   Maybe we could make it dynamic (and use vmalloc() instead of
> alloc_large_system_hash())

Is flex_array better than vmalloc()?
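
For what it's worth, a rough sketch of a flex_array-backed flow table, which
would avoid one large virtually contiguous allocation (the table and helper
names are made up for illustration):

#include <linux/flex_array.h>
#include <linux/gfp.h>
#include <linux/errno.h>

#define EXAMPLE_NO_CPU	0xffff

static struct flex_array *example_rfs_table;

static int example_rfs_table_init(unsigned int nr_entries)
{
	example_rfs_table = flex_array_alloc(sizeof(u16), nr_entries,
					     GFP_KERNEL);
	return example_rfs_table ? 0 : -ENOMEM;
}

/* may have to allocate the backing page covering 'idx' */
static int example_rfs_table_set(unsigned int idx, u16 cpu)
{
	return flex_array_put(example_rfs_table, idx, &cpu, GFP_ATOMIC);
}

static u16 example_rfs_table_get(unsigned int idx)
{
	u16 *cpu = flex_array_get(example_rfs_table, idx);

	return cpu ? *cpu : EXAMPLE_NO_CPU;
}

One catch is that flex_array_put() from softirq context can fail when the
backing page is not already allocated, so a real table would probably want to
preallocate all parts (flex_array_prealloc()) at setup time.
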
Tom Herbert April 6, 2010, 2:25 p.m. UTC | #2
>
> Running on a preprod machine here, seems fine.
>
> Some questions:
>
> 1) The need to add "rps_flow_entries=xxx" at boot time is problematic.
>   Maybe we could make it dynamic (and use vmalloc() instead of
> alloc_large_system_hash())
>
Okay, could be a sysctl with vmalloc.
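
A rough sketch of that direction: size the global flow table at runtime and
back it with vmalloc(). Structure and function names below are illustrative,
not the eventual implementation:

#include <linux/vmalloc.h>
#include <linux/log2.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>

/* Illustrative flow table: one desired-CPU entry per hash bucket. */
struct example_sock_flow_table {
	unsigned int mask;	/* entries - 1, entries is a power of two */
	u16 ents[];
};

static struct example_sock_flow_table *example_sock_flows;

/* Called when the (hypothetical) sysctl is written. */
static int example_resize_flow_table(unsigned int entries)
{
	struct example_sock_flow_table *tbl;

	if (!entries)
		return -EINVAL;

	entries = roundup_pow_of_two(entries);
	tbl = vmalloc(sizeof(*tbl) + entries * sizeof(tbl->ents[0]));
	if (!tbl)
		return -ENOMEM;

	tbl->mask = entries - 1;
	/* 0xffff per entry == "no CPU recorded yet" (RPS_NO_CPU) */
	memset(tbl->ents, 0xff, entries * sizeof(tbl->ents[0]));

	/*
	 * A real version must publish the new table and retire the old one
	 * safely (e.g. with RCU) before vfree()ing it; that is omitted here.
	 */
	example_sock_flows = tbl;
	return 0;
}
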

> 2) inet_rps_save_rxhash(sk, skb->rxhash);
>
>        It should have a check to make sure some part of the stack doesn't feed
> many different rxhash values for a given socket (make sure we don't pollute the
> flow table with pseudo-random values).
>
If packets for a connection are always received on the same device, is
it reasonable to assume the rxhash is constant for that connection?

I suppose it's possible that packets for the same socket are being
constantly received on two different devices that are giving different
rxhashes.  That would already be bad, in that out-of-order delivery is
probably happening anyway.  I don't know if thrashing the sock_flow_table
is going to aggravate that scenario much.

Are there any other degenerate cases you're worried about?

> 3) UDP connected sockets don't benefit from RFS currently.
>   (Not sure many apps use connected UDP sockets; I do have some of them
> in house.)
>
Makes sense to support that.

> I am trying the following code, for IPv4 only:
>
> diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
> index 7af756d..5c2d37a 100644
> --- a/net/ipv4/udp.c
> +++ b/net/ipv4/udp.c
> @@ -1216,6 +1216,7 @@ int udp_disconnect(struct sock *sk, int flags)
>        sk->sk_state = TCP_CLOSE;
>        inet->inet_daddr = 0;
>        inet->inet_dport = 0;
> +       inet_rps_save_rxhash(sk, 0);
>        sk->sk_bound_dev_if = 0;
>        if (!(sk->sk_userlocks & SOCK_BINDADDR_LOCK))
>                inet_reset_saddr(sk);
> @@ -1257,8 +1258,12 @@ EXPORT_SYMBOL(udp_lib_unhash);
>
>  static int __udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
>  {
> -       int rc = sock_queue_rcv_skb(sk, skb);
> +       int rc;
> +
> +       if (inet_sk(sk)->inet_daddr)
> +               inet_rps_save_rxhash(sk, skb->rxhash);
>
> +       rc = sock_queue_rcv_skb(sk, skb);
>        if (rc < 0) {
>                int is_udplite = IS_UDPLITE(sk);
>
>
>
>
Eric Dumazet April 6, 2010, 3:10 p.m. UTC | #3
On Tue, 2010-04-06 at 07:25 -0700, Tom Herbert wrote:

> > 2) inet_rps_save_rxhash(sk, skb->rxhash);
> >
> >        It should have a check to make sure some part of the stack doesn't feed
> > many different rxhash values for a given socket (make sure we don't pollute the
> > flow table with pseudo-random values).
> >
> If packets for a connection are always received on the same device, is
> it reasonable to assume the rxhash is constant for that connection?
> 
> I suppose it's possible that packets for a same sockets are being
> constantly received on two different devices that are giving different
> rxhashes.  This would already be bad in that OOO is probably happening
> anyway.  I don't know if thrashing the sock_flow_table is going to
> aggravate this scenario much.
> 
> Are there any other degenerative cases you're worried about?

No, I was mostly considering this as a debugging aid, before RPS is
stable, because mismatches could go unnoticed and performance would not
be optimal.
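
A sketch of the kind of check being discussed: warn once if the stack tries
to replace a socket's saved rxhash with a different non-zero value. The helper
name is made up, and the sk_rxhash field is assumed to be where the patch
keeps the saved hash:

#include <linux/kernel.h>
#include <net/sock.h>

static inline void example_save_rxhash_checked(struct sock *sk, u32 rxhash)
{
	/* flag flows whose hash keeps changing; likely flow-table pollution */
	WARN_ON_ONCE(sk->sk_rxhash && rxhash && sk->sk_rxhash != rxhash);
	sk->sk_rxhash = rxhash;
}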
 



Patch

diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 7af756d..5c2d37a 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1216,6 +1216,7 @@  int udp_disconnect(struct sock *sk, int flags)
 	sk->sk_state = TCP_CLOSE;
 	inet->inet_daddr = 0;
 	inet->inet_dport = 0;
+	inet_rps_save_rxhash(sk, 0);
 	sk->sk_bound_dev_if = 0;
 	if (!(sk->sk_userlocks & SOCK_BINDADDR_LOCK))
 		inet_reset_saddr(sk);
@@ -1257,8 +1258,12 @@  EXPORT_SYMBOL(udp_lib_unhash);
 
 static int __udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
 {
-	int rc = sock_queue_rcv_skb(sk, skb);
+	int rc;
+
+	if (inet_sk(sk)->inet_daddr)
+		inet_rps_save_rxhash(sk, skb->rxhash);
 
+	rc = sock_queue_rcv_skb(sk, skb);
 	if (rc < 0) {
 		int is_udplite = IS_UDPLITE(sk);