
[RFC] IPv6: Avoid taking write lock for /proc/net/ipv6_route

Message ID CAKA=qzbZ4x5u1bBM0gAECeMNutyaVzp1FmRPwKHsBEEs7984ag@mail.gmail.com
State Accepted, archived
Delegated to: David Miller

Commit Message

Josh Hunt Dec. 28, 2011, 11:23 p.m. UTC
During some debugging I needed to look into how /proc/net/ipv6_route
operated, and in my digging I found it's calling fib6_clean_all(), which
uses "write_lock_bh(&table->tb6_lock)" before doing the walk of the
table. I saw this on 2.6.32, but reading the code I believe the same
basic idea exists in the current code. Looking at the rtnetlink code,
it only takes "read_lock_bh(&table->tb6_lock);" via
fib6_dump_table(). While I realize reading from proc probably isn't
the recommended way of fetching the ipv6 route table, taking a write
lock seems unnecessary and would probably cause network performance
issues.
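
For reference, the fib6_clean_all() that the proc path ends up in looks
roughly like this (reconstructed from the read-lock copy in the patch
below, so treat it as a sketch rather than a verbatim quote of ip6_fib.c):

void fib6_clean_all(struct net *net, int (*func)(struct rt6_info *, void *arg),
		    int prune, void *arg)
{
	struct fib6_table *table;
	struct hlist_node *node;
	struct hlist_head *head;
	unsigned int h;

	rcu_read_lock();
	for (h = 0; h < FIB6_TABLE_HASHSZ; h++) {
		head = &net->ipv6.fib_table_hash[h];
		hlist_for_each_entry_rcu(table, node, head, tb6_hlist) {
			/* exclusive lock: a proc reader contends with the data path */
			write_lock_bh(&table->tb6_lock);
			fib6_clean_tree(net, &table->tb6_root,
					func, prune, arg);
			write_unlock_bh(&table->tb6_lock);
		}
	}
	rcu_read_unlock();
}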

To verify this I loaded up the ipv6 route table and then ran iperf in 3 cases:
  * doing nothing
  * reading ipv6 route table via proc
    (while :; do cat /proc/net/ipv6_route > /dev/null; done)
  * reading ipv6 route table via rtnetlink
    (while :; do ip -6 route show table all > /dev/null; done)

* Load the ipv6 route table up with:
  * for ((i = 0;i < 4000;i++)); do ip route add unreachable 2000::$i; done

* iperf commands:
  * client: iperf -i 1 -V -c <ipv6 addr>
  * server: iperf -V -s

* iperf results - 3 runs as client, 3 runs as server (in Mbits/sec)
  * nothing: client: 927,927,927 server: 927,927,927
  * proc: client: 179,97,96,113 server: 142,112,133
  * iproute: client: 928,927,928 server: 927,927,927

lock_stat shows taking the write lock is causing the slowdown. Using
this info I decided to write a version of fib6_clean_all() which
replaces write_lock_bh(&table->tb6_lock) with
read_lock_bh(&table->tb6_lock). With this new function I see the same
results as with my rtnetlink iperf test. I guess my question is what
am I missing? Is there a reason you need to take the write lock when
reading the route table to display to proc?

I attached a patch with my crude method listed above.

Thanks
Josh

Comments

David Miller Dec. 30, 2011, 10:09 p.m. UTC | #1
From: Josh Hunt <joshhunt00@gmail.com>
Date: Wed, 28 Dec 2011 17:23:07 -0600

> lock_stat shows taking the write lock is causing the slowdown. Using
> this info I decided to write a version of fib6_clean_all() which
> replaces write_lock_bh(&table->tb6_lock) with
> read_lock_bh(&table->tb6_lock). With this new function I see the same
> results as with my rtnetlink iperf test. I guess my question is what
> am I missing? Is there a reason you need to take the write lock when
> reading the route table to display to proc?

You're not missing anything, it's just an oversight or laziness. :-)

I've applied your patch, thanks.

Longer term we should make the ipv6 tree traversals RCU safe just
like net/ipv4/fib_trie.c is.  Then we can do away with even the
read locks for read-only traversals.
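
Concretely, once the walker is made RCU-safe, the read-only path could
shrink to something like the sketch below; fib6_clean_all_rcu() is
hypothetical and only meant to show the shape, with no tb6_lock taken
at all:

void fib6_clean_all_rcu(struct net *net,
			int (*func)(struct rt6_info *, void *arg),
			int prune, void *arg)
{
	struct fib6_table *table;
	struct hlist_node *node;
	struct hlist_head *head;
	unsigned int h;

	rcu_read_lock();
	for (h = 0; h < FIB6_TABLE_HASHSZ; h++) {
		head = &net->ipv6.fib_table_hash[h];
		hlist_for_each_entry_rcu(table, node, head, tb6_hlist)
			/* no tb6_lock: relies on an RCU-safe fib6_clean_tree() */
			fib6_clean_tree(net, &table->tb6_root,
					func, prune, arg);
	}
	rcu_read_unlock();
}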

Josh Hunt Dec. 31, 2011, 7:49 p.m. UTC | #2
On Fri, Dec 30, 2011 at 4:09 PM, David Miller <davem@davemloft.net> wrote:
> From: Josh Hunt <joshhunt00@gmail.com>
> Date: Wed, 28 Dec 2011 17:23:07 -0600
>
>> lock_stat shows taking the write lock is causing the slowdown. Using
>> this info I decided to write a version of fib6_clean_all() which
>> replaces write_lock_bh(&table->tb6_lock) with
>> read_lock_bh(&table->tb6_lock). With this new function I see the same
>> results as with my rtnetlink iperf test. I guess my question is what
>> am I missing? Is there a reason you need to take the write lock when
>> reading the route table to display to proc?
>
> You're not missing anything, it's just an oversight or laziness. :-)
>
> I've applied your patch thanks.
>
> Longer term we should make the ipv6 tree traversals RCU safe just
> like net/ipv4/fib_trie.c is.  Then we can do away with even the
> read locks for read-only traversals.
>

Thanks David. Are you aware if anyone has started the work to make
IPv6 traversals RCU safe?
David Miller Dec. 31, 2011, 10:46 p.m. UTC | #3
From: Josh Hunt <joshhunt00@gmail.com>
Date: Sat, 31 Dec 2011 13:49:38 -0600

> Thanks David. Are you aware if anyone has started the work to make
> IPv6 traversals RCU safe?

Nobody is working on this.

Patch

commit 606e34e9301aab5e73f47f24b21a61f813398367
Author: Josh Hunt <joshhunt00@gmail.com>
Date:   Wed Dec 28 14:54:44 2011 -0800

    IPv6: Avoid taking write lock for /proc/net/ipv6_route

During some debugging I needed to look into how /proc/net/ipv6_route
operated and in my digging I found it's calling fib6_clean_all() which uses
"write_lock_bh(&table->tb6_lock)" before doing the walk of the table. I
found this on 2.6.32, but reading the code I believe the same basic idea
exists currently. Looking at the rtnetlink code, it only takes
"read_lock_bh(&table->tb6_lock);" via fib6_dump_table(). While I realize
reading from proc isn't the recommended way of fetching the ipv6 route
table, taking a write lock seems unnecessary and would probably cause
network performance issues.

To verify this I loaded up the ipv6 route table and then ran iperf in 3
cases:
  * doing nothing
  * reading ipv6 route table via proc
    (while :; do cat /proc/net/ipv6_route > /dev/null; done)
  * reading ipv6 route table via rtnetlink
    (while :; do ip -6 route show table all > /dev/null; done)

* Load the ipv6 route table up with:
  * for ((i = 0;i < 4000;i++)); do ip route add unreachable 2000::$i; done

* iperf commands:
  * client: iperf -i 1 -V -c <ipv6 addr>
  * server: iperf -V -s

* iperf results - 3 runs each (in Mbits/sec)
  * nothing: client: 927,927,927 server: 927,927,927
  * proc: client: 179,97,96,113 server: 142,112,133
  * iproute: client: 928,927,928 server: 927,927,927

lock_stat shows taking the write lock is causing the slowdown. Using this
info I decided to write a version of fib6_clean_all() which replaces
write_lock_bh(&table->tb6_lock) with read_lock_bh(&table->tb6_lock). With
this new function I see the same results as with my rtnetlink iperf test.

Signed-off-by: Josh Hunt <joshhunt00@gmail.com>
---

diff --git a/include/net/ip6_fib.h b/include/net/ip6_fib.h
index 5735a0f..1d242bb 100644
--- a/include/net/ip6_fib.h
+++ b/include/net/ip6_fib.h
@@ -202,6 +202,10 @@  struct fib6_node		*fib6_locate(struct fib6_node *root,
 					     const struct in6_addr *daddr, int dst_len,
 					     const struct in6_addr *saddr, int src_len);
 
+extern void			fib6_clean_all_ro(struct net *net,
+					       int (*func)(struct rt6_info *, void *arg),
+					       int prune, void *arg);
+
 extern void			fib6_clean_all(struct net *net,
 					       int (*func)(struct rt6_info *, void *arg),
 					       int prune, void *arg);
diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
index 93718f3..3924b67 100644
--- a/net/ipv6/ip6_fib.c
+++ b/net/ipv6/ip6_fib.c
@@ -1380,6 +1380,26 @@  static void fib6_clean_tree(struct net *net, struct fib6_node *root,
 	fib6_walk(&c.w);
 }
 
+void fib6_clean_all_ro(struct net *net, int (*func)(struct rt6_info *, void *arg),
+		    int prune, void *arg)
+{
+	struct fib6_table *table;
+	struct hlist_node *node;
+	struct hlist_head *head;
+	unsigned int h;
+
+	rcu_read_lock();
+	for (h = 0; h < FIB6_TABLE_HASHSZ; h++) {
+		head = &net->ipv6.fib_table_hash[h];
+		hlist_for_each_entry_rcu(table, node, head, tb6_hlist) {
+			read_lock_bh(&table->tb6_lock);
+			fib6_clean_tree(net, &table->tb6_root,
+					func, prune, arg);
+			read_unlock_bh(&table->tb6_lock);
+		}
+	}
+	rcu_read_unlock();
+}
 void fib6_clean_all(struct net *net, int (*func)(struct rt6_info *, void *arg),
 		    int prune, void *arg)
 {
diff --git a/net/ipv6/route.c b/net/ipv6/route.c
index 3399dd3..20f8f50 100644
--- a/net/ipv6/route.c
+++ b/net/ipv6/route.c
@@ -2659,7 +2659,7 @@  static int rt6_info_route(struct rt6_info *rt, void *p_arg)
 static int ipv6_route_show(struct seq_file *m, void *v)
 {
 	struct net *net = (struct net *)m->private;
-	fib6_clean_all(net, rt6_info_route, 0, m);
+	fib6_clean_all_ro(net, rt6_info_route, 0, m);
 	return 0;
 }