[RFC] iproute: Faster ip link add, set and delete

Message ID 20130327104746.0ec9dcb5@nehalam.linuxnetplumber.net
State RFC, archived
Delegated to: stephen hemminger

Commit Message

Stephen Hemminger March 27, 2013, 5:47 p.m. UTC
If you need to do lots of operations, the --batch mode will be
significantly faster: one command startup and one link map.
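
For illustration, a batch file of the sort --batch expects might look like
this (the interface names here are made up):

# cat > links.batch <<'EOF'
link add a1 type veth peer name b1
link add a2 type veth peer name b2
link set dev a1 up
link set dev b1 up
EOF
# ip -batch links.batch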

I have an updated version of the link map hash (keyed by both index and
name). Could you test this patch, which applies to the latest version in git?


Comments

Eric W. Biederman March 28, 2013, 12:46 a.m. UTC | #1
Stephen Hemminger <stephen@networkplumber.org> writes:

> If you need to do lots of operations the --batch mode will be significantly faster.
> One command start and one link map.

The problem in this case, as I understand it, is lots of independent
operations. Maybe lxc should not shell out to ip at all, and should
instead perform the work itself.

> I have an updated version of link map hash (index and name). Could you test this patch
> which applies to latest version in git.

This still dumps all of the interfaces in ll_init_map, causing things to
slow down noticeably.

# with your patch
# time ~/projects/iproute/iproute2/ip/ip link add a4511 type veth peer name b4511

real	0m0.049s
user	0m0.000s
sys	0m0.048s

# With a hack to make ll_init_map a nop.
# time ~/projects/iproute/iproute2/ip/ip link add a4512 type veth peer name b4512

real	0m0.003s
user	0m0.000s
sys	0m0.000s
eric-ThinkPad-X220 6bed4 # 

# Without any patches.
# time ~/projects/iproute/iproute2/ip/ip link add a5002 type veth peer name b5002

real	0m0.052s
user	0m0.004s
sys	0m0.044s

So it looks like dumping all of the interfaces is taking about 46
milliseconds longer than it otherwise would, causing ip to take nearly an
order of magnitude longer to run when there are a lot of interfaces, and to
get slower with each successive command as the interface count grows.

So the ideal situation is probably just to fill in the ll_map on demand
instead of up front.
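
As a rough illustration, an on-demand name-to-index lookup might look
something like the sketch below: resolve a single interface with
if_nametoindex(3) and cache the answer, instead of dumping every link up
front. This is only a sketch, not code from this thread, and the cache here
is a deliberately simple list:

#include <net/if.h>
#include <stdlib.h>
#include <string.h>

struct lazy_entry {
	struct lazy_entry *next;
	unsigned index;
	char name[IF_NAMESIZE];
};

static struct lazy_entry *lazy_cache;

unsigned lazy_name_to_index(const char *name)
{
	struct lazy_entry *e;
	unsigned idx;

	for (e = lazy_cache; e; e = e->next)	/* cache hit? */
		if (strcmp(e->name, name) == 0)
			return e->index;

	idx = if_nametoindex(name);	/* one targeted query, no full dump */
	if (idx != 0 && (e = malloc(sizeof(*e))) != NULL) {
		e->index = idx;
		strncpy(e->name, name, IF_NAMESIZE - 1);
		e->name[IF_NAMESIZE - 1] = '\0';
		e->next = lazy_cache;
		lazy_cache = e;
	}
	return idx;
}

The same idea would apply to index-to-name lookups via if_indextoname(3).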

Eric
Serge E. Hallyn March 28, 2013, 3:20 a.m. UTC | #2
Quoting Eric W. Biederman (ebiederm@xmission.com):
> Stephen Hemminger <stephen@networkplumber.org> writes:
> 
> > If you need to do lots of operations the --batch mode will be significantly faster.
> > One command start and one link map.
> 
> The problem in this case as I understand it is lots of independent
> operations. Now maybe lxc should not shell out to ip and perform the
> work itself.

fwiw lxc uses netlink to create new veths, and picks random names with
mktemp() ahead of time.
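
For context, creating a veth pair directly over rtnetlink looks roughly like
the sketch below. This is illustrative only, not lxc's actual code; error
handling is kept minimal and a real caller would also read back and check
the kernel's NLMSG_ERROR acknowledgement:

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <linux/if_link.h>
#include <linux/veth.h>

/* append an empty nested attribute and return it so it can be closed later */
static struct rtattr *nest_begin(struct nlmsghdr *n, unsigned short type)
{
	struct rtattr *rta = (struct rtattr *)((char *)n + NLMSG_ALIGN(n->nlmsg_len));

	rta->rta_type = type;
	rta->rta_len = RTA_LENGTH(0);
	n->nlmsg_len = NLMSG_ALIGN(n->nlmsg_len) + RTA_LENGTH(0);
	return rta;
}

static void nest_end(struct nlmsghdr *n, struct rtattr *nest)
{
	nest->rta_len = (char *)n + NLMSG_ALIGN(n->nlmsg_len) - (char *)nest;
}

/* append a string attribute (NUL included, as rtnetlink expects) */
static void put_str(struct nlmsghdr *n, unsigned short type, const char *s)
{
	struct rtattr *rta = (struct rtattr *)((char *)n + NLMSG_ALIGN(n->nlmsg_len));

	rta->rta_type = type;
	rta->rta_len = RTA_LENGTH(strlen(s) + 1);
	memcpy(RTA_DATA(rta), s, strlen(s) + 1);
	n->nlmsg_len = NLMSG_ALIGN(n->nlmsg_len) + RTA_ALIGN(rta->rta_len);
}

/* sketch: create a veth pair named name/peer over rtnetlink */
int veth_create(const char *name, const char *peer)
{
	struct {
		struct nlmsghdr n;
		struct ifinfomsg ifi;
		char buf[512];
	} req;
	struct sockaddr_nl kernel = { .nl_family = AF_NETLINK };
	struct rtattr *linkinfo, *data, *peerinfo;
	int fd, err = 0;

	memset(&req, 0, sizeof(req));
	req.n.nlmsg_len = NLMSG_LENGTH(sizeof(struct ifinfomsg));
	req.n.nlmsg_type = RTM_NEWLINK;
	req.n.nlmsg_flags = NLM_F_REQUEST | NLM_F_CREATE | NLM_F_EXCL | NLM_F_ACK;

	put_str(&req.n, IFLA_IFNAME, name);

	linkinfo = nest_begin(&req.n, IFLA_LINKINFO);
	put_str(&req.n, IFLA_INFO_KIND, "veth");
	data = nest_begin(&req.n, IFLA_INFO_DATA);
	peerinfo = nest_begin(&req.n, VETH_INFO_PEER);
	/* the peer attribute carries its own (zeroed) struct ifinfomsg */
	req.n.nlmsg_len = NLMSG_ALIGN(req.n.nlmsg_len) + sizeof(struct ifinfomsg);
	put_str(&req.n, IFLA_IFNAME, peer);
	nest_end(&req.n, peerinfo);
	nest_end(&req.n, data);
	nest_end(&req.n, linkinfo);

	fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
	if (fd < 0)
		return -1;
	if (sendto(fd, &req, req.n.nlmsg_len, 0,
		   (struct sockaddr *)&kernel, sizeof(kernel)) < 0)
		err = -1;
	close(fd);
	return err;
}

In such a sketch the caller would pass in the mktemp()-style names that were
picked ahead of time.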

-serge
Eric W. Biederman March 28, 2013, 3:44 a.m. UTC | #3
Serge Hallyn <serge.hallyn@ubuntu.com> writes:

> Quoting Eric W. Biederman (ebiederm@xmission.com):
>> Stephen Hemminger <stephen@networkplumber.org> writes:
>> 
>> > If you need to do lots of operations the --batch mode will be significantly faster.
>> > One command start and one link map.
>> 
>> The problem in this case as I understand it is lots of independent
>> operations. Now maybe lxc should not shell out to ip and perform the
>> work itself.
>
> fwiw lxc uses netlink to create new veths, and picks random names with
> mktemp() ahead of time.

I am puzzled: where does the slowness in iproute2 come into play?

Eric
Serge E. Hallyn March 28, 2013, 4:28 a.m. UTC | #4
Quoting Eric W. Biederman (ebiederm@xmission.com):
> Serge Hallyn <serge.hallyn@ubuntu.com> writes:
> 
> > Quoting Eric W. Biederman (ebiederm@xmission.com):
> >> Stephen Hemminger <stephen@networkplumber.org> writes:
> >> 
> >> > If you need to do lots of operations the --batch mode will be significantly faster.
> >> > One command start and one link map.
> >> 
> >> The problem in this case as I understand it is lots of independent
> >> operations. Now maybe lxc should not shell out to ip and perform the
> >> work itself.
> >
> > fwiw lxc uses netlink to create new veths, and picks random names with
> > mktemp() ahead of time.
> 
> I am puzzled where does the slownes in iproute2 come into play?

Benoit originally reported slowness when starting >1500 containers.  I
asked him to run a few manual tests to figure out what was taking the
time.  Manually creating a large # of veths was an obvious test, and
one which showed poorly scaling performance.
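
(The exact commands are not shown in this thread; a manual test along those
lines would be something like the following, timing link creation once the
link count has grown large.)

# for i in $(seq 1 4000); do ip link add v$i type veth peer name p$i; done
# time ip link add v4001 type veth peer name p4001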

It may well be that there are other things slowing down lxc, of course.

-serge
Eric W. Biederman March 28, 2013, 5 a.m. UTC | #5
Serge Hallyn <serge.hallyn@ubuntu.com> writes:

> Quoting Eric W. Biederman (ebiederm@xmission.com):
>> Serge Hallyn <serge.hallyn@ubuntu.com> writes:
>> 
>> > Quoting Eric W. Biederman (ebiederm@xmission.com):
>> >> Stephen Hemminger <stephen@networkplumber.org> writes:
>> >> 
>> >> > If you need to do lots of operations the --batch mode will be significantly faster.
>> >> > One command start and one link map.
>> >> 
>> >> The problem in this case as I understand it is lots of independent
>> >> operations. Now maybe lxc should not shell out to ip and perform the
>> >> work itself.
>> >
>> > fwiw lxc uses netlink to create new veths, and picks random names with
>> > mktemp() ahead of time.
>> 
>> I am puzzled where does the slownes in iproute2 come into play?
>
> Benoit originally reported slowness when starting >1500 containers.  I
> asked him to run a few manual tests to figure out what was taking the
> time.  Manually creating a large # of veths was an obvious test, and
> one which showed poorly scaling performance.

Apparently iproute is involved somewhere, since when he tested with a
patched iproute (as you asked him to) the lxc startup slowdown was
gone.

> May well be there are other things slowing down lxc of course.

The evidence indicates it was iproute being called somewhere...


Eric
Serge E. Hallyn March 28, 2013, 1:36 p.m. UTC | #6
Quoting Eric W. Biederman (ebiederm@xmission.com):
> Serge Hallyn <serge.hallyn@ubuntu.com> writes:
> 
> > Quoting Eric W. Biederman (ebiederm@xmission.com):
> >> Serge Hallyn <serge.hallyn@ubuntu.com> writes:
> >> 
> >> > Quoting Eric W. Biederman (ebiederm@xmission.com):
> >> >> Stephen Hemminger <stephen@networkplumber.org> writes:
> >> >> 
> >> >> > If you need to do lots of operations the --batch mode will be significantly faster.
> >> >> > One command start and one link map.
> >> >> 
> >> >> The problem in this case as I understand it is lots of independent
> >> >> operations. Now maybe lxc should not shell out to ip and perform the
> >> >> work itself.
> >> >
> >> > fwiw lxc uses netlink to create new veths, and picks random names with
> >> > mktemp() ahead of time.
> >> 
> >> I am puzzled where does the slownes in iproute2 come into play?
> >
> > Benoit originally reported slowness when starting >1500 containers.  I
> > asked him to run a few manual tests to figure out what was taking the
> > time.  Manually creating a large # of veths was an obvious test, and
> > one which showed poorly scaling performance.
> 
> Apparently iproute is involved somehwere as when he tested with a
> patched iproute (as you asked him to) the lxc startup slowdown was
> gone.
> 
> > May well be there are other things slowing down lxc of course.
> 
> The evidence indicates it was iproute being called somewhere...

Benoit can you tell us exactly what test you were running when you saw
the slowdown was gone?

-serge
Benoit Lourdelet March 28, 2013, 1:42 p.m. UTC | #7
Hello,

My test consists of starting small containers (10MB of RAM each). Each
container has two physical VLAN interfaces attached:

lxc.network.type = phys
lxc.network.flags = up
lxc.network.link = eth6.3
lxc.network.name = eth2
lxc.network.hwaddr = 00:50:56:a8:03:03
lxc.network.ipv4 = 192.168.1.1/24
lxc.network.type = phys
lxc.network.flags = up
lxc.network.link = eth7.3
lxc.network.name = eth1
lxc.network.ipv4 = 2.2.2.2/24
lxc.network.hwaddr = 00:50:57:b8:00:01



With the initial iproute2, when I reach around 1600 containers, container
creation almost stops. It takes at least 20s per container to start.
With the patched iproute2, I have started 4000 containers at a rate of 1 per
second without problems. I have 8000 VLAN interfaces configured on the host
(2x 4000).


Regards

Benoit

On 28/03/2013 14:36, "Serge Hallyn" <serge.hallyn@ubuntu.com> wrote:

>Quoting Eric W. Biederman (ebiederm@xmission.com):
>> Serge Hallyn <serge.hallyn@ubuntu.com> writes:
>> 
>> > Quoting Eric W. Biederman (ebiederm@xmission.com):
>> >> Serge Hallyn <serge.hallyn@ubuntu.com> writes:
>> >> 
>> >> > Quoting Eric W. Biederman (ebiederm@xmission.com):
>> >> >> Stephen Hemminger <stephen@networkplumber.org> writes:
>> >> >> 
>> >> >> > If you need to do lots of operations the --batch mode will be
>>significantly faster.
>> >> >> > One command start and one link map.
>> >> >> 
>> >> >> The problem in this case as I understand it is lots of independent
>> >> >> operations. Now maybe lxc should not shell out to ip and perform
>>the
>> >> >> work itself.
>> >> >
>> >> > fwiw lxc uses netlink to create new veths, and picks random names
>>with
>> >> > mktemp() ahead of time.
>> >> 
>> >> I am puzzled where does the slownes in iproute2 come into play?
>> >
>> > Benoit originally reported slowness when starting >1500 containers.  I
>> > asked him to run a few manual tests to figure out what was taking the
>> > time.  Manually creating a large # of veths was an obvious test, and
>> > one which showed poorly scaling performance.
>> 
>> Apparently iproute is involved somehwere as when he tested with a
>> patched iproute (as you asked him to) the lxc startup slowdown was
>> gone.
>> 
>> > May well be there are other things slowing down lxc of course.
>> 
>> The evidence indicates it was iproute being called somewhere...
>
>Benoit can you tell us exactly what test you were running when you saw
>the slowdown was gone?
>
>-serge
>


Serge E. Hallyn March 28, 2013, 3:04 p.m. UTC | #8
Quoting Benoit Lourdelet (blourdel@juniper.net):
> Hello,
> 
> My test consists in starting small containers (10MB of RAM ) each. Each
> container has 2x physical VLAN interfaces attached.

Which commands were you using to create/start them?

> lxc.network.type = phys
> lxc.network.flags = up
> lxc.network.link = eth6.3
> lxc.network.name = eth2
> lxc.network.hwaddr = 00:50:56:a8:03:03
> lxc.network.ipv4 = 192.168.1.1/24
> lxc.network.type = phys
> lxc.network.flags = up
> lxc.network.link = eth7.3
> lxc.network.name = eth1
> lxc.network.ipv4 = 2.2.2.2/24
> lxc.network.hwaddr = 00:50:57:b8:00:01
> 
> 
> 
> With initial iproute2 , when I reach around 1600 containers, container
> creation almost stops.It takes at least 20s per container to start.
> With patched iproutes2 , I have started 4000 containers at a rate of 1 per
> second w/o problem. I have 8000 clan interfaces configured on the host (2x
> 4000).
> 
> 
> Regards
> 
> Benoit
> 
> On 28/03/2013 14:36, "Serge Hallyn" <serge.hallyn@ubuntu.com> wrote:
> 
> >Quoting Eric W. Biederman (ebiederm@xmission.com):
> >> Serge Hallyn <serge.hallyn@ubuntu.com> writes:
> >> 
> >> > Quoting Eric W. Biederman (ebiederm@xmission.com):
> >> >> Serge Hallyn <serge.hallyn@ubuntu.com> writes:
> >> >> 
> >> >> > Quoting Eric W. Biederman (ebiederm@xmission.com):
> >> >> >> Stephen Hemminger <stephen@networkplumber.org> writes:
> >> >> >> 
> >> >> >> > If you need to do lots of operations the --batch mode will be
> >>significantly faster.
> >> >> >> > One command start and one link map.
> >> >> >> 
> >> >> >> The problem in this case as I understand it is lots of independent
> >> >> >> operations. Now maybe lxc should not shell out to ip and perform
> >>the
> >> >> >> work itself.
> >> >> >
> >> >> > fwiw lxc uses netlink to create new veths, and picks random names
> >>with
> >> >> > mktemp() ahead of time.
> >> >> 
> >> >> I am puzzled where does the slownes in iproute2 come into play?
> >> >
> >> > Benoit originally reported slowness when starting >1500 containers.  I
> >> > asked him to run a few manual tests to figure out what was taking the
> >> > time.  Manually creating a large # of veths was an obvious test, and
> >> > one which showed poorly scaling performance.
> >> 
> >> Apparently iproute is involved somehwere as when he tested with a
> >> patched iproute (as you asked him to) the lxc startup slowdown was
> >> gone.
> >> 
> >> > May well be there are other things slowing down lxc of course.
> >> 
> >> The evidence indicates it was iproute being called somewhere...
> >
> >Benoit can you tell us exactly what test you were running when you saw
> >the slowdown was gone?
> >
> >-serge
> >
> 
> 
Benoit Lourdelet March 28, 2013, 3:21 p.m. UTC | #9
I use, for each container:

lxc-start -n lwb2001 -f /var/lib/lxc/lwb2001/config -d

I created the containers with lxc-ubuntu -n lwb2001

Benoit

On 28/03/2013 16:04, "Serge Hallyn" <serge.hallyn@ubuntu.com> wrote:

>Quoting Benoit Lourdelet (blourdel@juniper.net):
>> Hello,
>> 
>> My test consists in starting small containers (10MB of RAM ) each. Each
>> container has 2x physical VLAN interfaces attached.
>
>Which commands were you using to create/start them?
>
>> lxc.network.type = phys
>> lxc.network.flags = up
>> lxc.network.link = eth6.3
>> lxc.network.name = eth2
>> lxc.network.hwaddr = 00:50:56:a8:03:03
>> lxc.network.ipv4 = 192.168.1.1/24
>> lxc.network.type = phys
>> lxc.network.flags = up
>> lxc.network.link = eth7.3
>> lxc.network.name = eth1
>> lxc.network.ipv4 = 2.2.2.2/24
>> lxc.network.hwaddr = 00:50:57:b8:00:01
>> 
>> 
>> 
>> With initial iproute2 , when I reach around 1600 containers, container
>> creation almost stops.It takes at least 20s per container to start.
>> With patched iproutes2 , I have started 4000 containers at a rate of 1
>>per
>> second w/o problem. I have 8000 clan interfaces configured on the host
>>(2x
>> 4000).
>> 
>> 
>> Regards
>> 
>> Benoit
>> 
>> On 28/03/2013 14:36, "Serge Hallyn" <serge.hallyn@ubuntu.com> wrote:
>> 
>> >Quoting Eric W. Biederman (ebiederm@xmission.com):
>> >> Serge Hallyn <serge.hallyn@ubuntu.com> writes:
>> >> 
>> >> > Quoting Eric W. Biederman (ebiederm@xmission.com):
>> >> >> Serge Hallyn <serge.hallyn@ubuntu.com> writes:
>> >> >> 
>> >> >> > Quoting Eric W. Biederman (ebiederm@xmission.com):
>> >> >> >> Stephen Hemminger <stephen@networkplumber.org> writes:
>> >> >> >> 
>> >> >> >> > If you need to do lots of operations the --batch mode will be
>> >>significantly faster.
>> >> >> >> > One command start and one link map.
>> >> >> >> 
>> >> >> >> The problem in this case as I understand it is lots of
>>independent
>> >> >> >> operations. Now maybe lxc should not shell out to ip and
>>perform
>> >>the
>> >> >> >> work itself.
>> >> >> >
>> >> >> > fwiw lxc uses netlink to create new veths, and picks random
>>names
>> >>with
>> >> >> > mktemp() ahead of time.
>> >> >> 
>> >> >> I am puzzled where does the slownes in iproute2 come into play?
>> >> >
>> >> > Benoit originally reported slowness when starting >1500
>>containers.  I
>> >> > asked him to run a few manual tests to figure out what was taking
>>the
>> >> > time.  Manually creating a large # of veths was an obvious test,
>>and
>> >> > one which showed poorly scaling performance.
>> >> 
>> >> Apparently iproute is involved somehwere as when he tested with a
>> >> patched iproute (as you asked him to) the lxc startup slowdown was
>> >> gone.
>> >> 
>> >> > May well be there are other things slowing down lxc of course.
>> >> 
>> >> The evidence indicates it was iproute being called somewhere...
>> >
>> >Benoit can you tell us exactly what test you were running when you saw
>> >the slowdown was gone?
>> >
>> >-serge
>> >
>> 
>> 
>



Patch
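
For orientation, the ll_map functions changed below are used by the rest of
iproute2 roughly as in the following sketch (this is not part of the patch):

#include <stdio.h>
#include "libnetlink.h"
#include "ll_map.h"

int main(void)
{
	struct rtnl_handle rth;

	if (rtnl_open(&rth, 0) < 0)
		return 1;

	/* dumps every link once and fills the cache up front */
	ll_init_map(&rth);

	printf("lo has ifindex %u\n", ll_name_to_index("lo"));
	printf("ifindex 1 is %s\n", ll_index_to_name(1));

	rtnl_close(&rth);
	return 0;
}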

diff --git a/lib/ll_map.c b/lib/ll_map.c
index e9ae129..bf5b0bc 100644
--- a/lib/ll_map.c
+++ b/lib/ll_map.c
@@ -12,6 +12,7 @@ 
 
 #include <stdio.h>
 #include <stdlib.h>
+#include <stddef.h>
 #include <unistd.h>
 #include <syslog.h>
 #include <fcntl.h>
@@ -23,9 +24,44 @@ 
 #include "libnetlink.h"
 #include "ll_map.h"
 
-struct ll_cache
+
+struct hlist_head {
+	struct hlist_node *first;
+};
+
+struct hlist_node {
+	struct hlist_node *next, **pprev;
+};
+
+static inline void hlist_del(struct hlist_node *n)
+{
+	struct hlist_node *next = n->next;
+	struct hlist_node **pprev = n->pprev;
+	*pprev = next;
+	if (next)
+		next->pprev = pprev;
+}
+
+static inline void hlist_add_head(struct hlist_node *n, struct hlist_head *h)
 {
-	struct ll_cache   *idx_next;
+	struct hlist_node *first = h->first;
+	n->next = first;
+	if (first)
+		first->pprev = &n->next;
+	h->first = n;
+	n->pprev = &h->first;
+}
+
+#define hlist_for_each(pos, head) \
+	for (pos = (head)->first; pos ; pos = pos->next)
+
+#define container_of(ptr, type, member) ({			\
+	const typeof( ((type *)0)->member ) *__mptr = (ptr);	\
+	(type *)( (char *)__mptr - offsetof(type,member) );})
+
+struct ll_cache {
+	struct hlist_node idx_hash;
+	struct hlist_node name_hash;
 	unsigned	flags;
 	int		index;
 	unsigned short	type;
@@ -33,49 +69,107 @@  struct ll_cache
 };
 
 #define IDXMAP_SIZE	1024
-static struct ll_cache *idx_head[IDXMAP_SIZE];
+static struct hlist_head idx_head[IDXMAP_SIZE];
+static struct hlist_head name_head[IDXMAP_SIZE];
 
-static inline struct ll_cache *idxhead(int idx)
+static struct ll_cache *ll_get_by_index(unsigned index)
 {
-	return idx_head[idx & (IDXMAP_SIZE - 1)];
+	struct hlist_node *n;
+	unsigned h = index & (IDXMAP_SIZE - 1);
+
+	hlist_for_each(n, &idx_head[h]) {
+		struct ll_cache *im
+			= container_of(n, struct ll_cache, idx_hash);
+		if (im->index == index)
+			return im;
+	}
+
+	return NULL;
+}
+
+static unsigned namehash(const char *str)
+{
+	unsigned hash = 5381;
+
+	while (*str)
+		hash = ((hash << 5) + hash) + *str++; /* hash * 33 + c */
+
+	return hash;
+}
+
+static struct ll_cache *ll_get_by_name(const char *name)
+{
+	struct hlist_node *n;
+	unsigned h = namehash(name) & (IDXMAP_SIZE - 1);
+
+	hlist_for_each(n, &name_head[h]) {
+		struct ll_cache *im
+			= container_of(n, struct ll_cache, name_hash);
+
+		if (strncmp(im->name, name, IFNAMSIZ) == 0)
+			return im;
+	}
+
+	return NULL;
 }
 
 int ll_remember_index(const struct sockaddr_nl *who,
 		      struct nlmsghdr *n, void *arg)
 {
-	int h;
+	unsigned int h;
+	const char *ifname;
 	struct ifinfomsg *ifi = NLMSG_DATA(n);
-	struct ll_cache *im, **imp;
+	struct ll_cache *im;
 	struct rtattr *tb[IFLA_MAX+1];
 
-	if (n->nlmsg_type != RTM_NEWLINK)
+	if (n->nlmsg_type != RTM_NEWLINK && n->nlmsg_type != RTM_DELLINK)
 		return 0;
 
 	if (n->nlmsg_len < NLMSG_LENGTH(sizeof(ifi)))
 		return -1;
 
+	im = ll_get_by_index(ifi->ifi_index);
+	if (n->nlmsg_type == RTM_DELLINK) {
+		if (im) {
+			hlist_del(&im->name_hash);
+			hlist_del(&im->idx_hash);
+			free(im);
+		}
+		return 0;
+	}
+
 	memset(tb, 0, sizeof(tb));
 	parse_rtattr(tb, IFLA_MAX, IFLA_RTA(ifi), IFLA_PAYLOAD(n));
-	if (tb[IFLA_IFNAME] == NULL)
+	ifname = rta_getattr_str(tb[IFLA_IFNAME]);
+	if (ifname == NULL)
 		return 0;
 
-	h = ifi->ifi_index & (IDXMAP_SIZE - 1);
-	for (imp = &idx_head[h]; (im=*imp)!=NULL; imp = &im->idx_next)
-		if (im->index == ifi->ifi_index)
-			break;
-
-	if (im == NULL) {
-		im = malloc(sizeof(*im));
-		if (im == NULL)
-			return 0;
-		im->idx_next = *imp;
-		im->index = ifi->ifi_index;
-		*imp = im;
+	if (im) {
+		/* change to existing entry */
+		if (strcmp(im->name, ifname) != 0) {
+			hlist_del(&im->name_hash);
+			h = namehash(ifname) & (IDXMAP_SIZE - 1);
+			hlist_add_head(&im->name_hash, &name_head[h]);
+		}
+
+		im->flags = ifi->ifi_flags;
+		return 0;
 	}
 
+	im = malloc(sizeof(*im));
+	if (im == NULL)
+		return 0;
+	im->index = ifi->ifi_index;
+	strcpy(im->name, ifname);
 	im->type = ifi->ifi_type;
 	im->flags = ifi->ifi_flags;
-	strcpy(im->name, RTA_DATA(tb[IFLA_IFNAME]));
+
+	h = ifi->ifi_index & (IDXMAP_SIZE - 1);
+	hlist_add_head(&im->idx_hash, &idx_head[h]);
+
+	h = namehash(ifname) & (IDXMAP_SIZE - 1);
+	hlist_add_head(&im->name_hash, &name_head[h]);
+
 	return 0;
 }
 
@@ -86,15 +180,14 @@  const char *ll_idx_n2a(unsigned idx, char *buf)
 	if (idx == 0)
 		return "*";
 
-	for (im = idxhead(idx); im; im = im->idx_next)
-		if (im->index == idx)
-			return im->name;
+	im = ll_get_by_index(idx);
+	if (im)
+		return im->name;
 
 	snprintf(buf, IFNAMSIZ, "if%d", idx);
 	return buf;
 }
 
-
 const char *ll_index_to_name(unsigned idx)
 {
 	static char nbuf[IFNAMSIZ];
@@ -108,10 +201,9 @@  int ll_index_to_type(unsigned idx)
 
 	if (idx == 0)
 		return -1;
-	for (im = idxhead(idx); im; im = im->idx_next)
-		if (im->index == idx)
-			return im->type;
-	return -1;
+
+	im = ll_get_by_index(idx);
+	return im ? im->type : -1;
 }
 
 unsigned ll_index_to_flags(unsigned idx)
@@ -121,35 +213,21 @@  unsigned ll_index_to_flags(unsigned idx)
 	if (idx == 0)
 		return 0;
 
-	for (im = idxhead(idx); im; im = im->idx_next)
-		if (im->index == idx)
-			return im->flags;
-	return 0;
+	im = ll_get_by_index(idx);
+	return im ? im->flags : -1;
 }
 
 unsigned ll_name_to_index(const char *name)
 {
-	static char ncache[IFNAMSIZ];
-	static int icache;
-	struct ll_cache *im;
-	int i;
+	const struct ll_cache *im;
 	unsigned idx;
 
 	if (name == NULL)
 		return 0;
 
-	if (icache && strcmp(name, ncache) == 0)
-		return icache;
-
-	for (i=0; i<IDXMAP_SIZE; i++) {
-		for (im = idx_head[i]; im; im = im->idx_next) {
-			if (strcmp(im->name, name) == 0) {
-				icache = im->index;
-				strcpy(ncache, name);
-				return im->index;
-			}
-		}
-	}
+	im = ll_get_by_name(name);
+	if (im)
+		return im->index;
 
 	idx = if_nametoindex(name);
 	if (idx == 0)