Message ID | 1537248415-15534-1-git-send-email-lirongqing@baidu.com
---|---
State | Changes Requested, archived
Delegated to | David Miller
Series | [net-next] netlink: avoid to allocate full skb when sending to many devices
On 09/17/2018 10:26 PM, Li RongQing wrote:
> if skb->head is vmalloc address, when this skb is delivered, full
> allocation for this skb is required, if there are many devices,
> the full allocation will be called for every devices
>
> now using the first time allocated skb when iterate other devices
> to send, reduce full allocation and speedup deliver.
>
> Signed-off-by: Zhang Yu <zhangyu31@baidu.com>
> Signed-off-by: Li RongQing <lirongqing@baidu.com>
> ---
>  net/netlink/af_netlink.c | 14 ++++++++------
>  1 file changed, 8 insertions(+), 6 deletions(-)
>

This looks very broken to me.

Only the original skb (given as an argument to __netlink_deliver_tap())
is guaranteed to not disappear while the loop is performed.

(There is no skb_clone() after the first netlink_to_full_skb())
> On 09/17/2018 10:26 PM, Li RongQing wrote:
> > if skb->head is vmalloc address, when this skb is delivered, full
> > allocation for this skb is required, if there are many devices, the
> > ---
> >  net/netlink/af_netlink.c | 14 ++++++++------
> >  1 file changed, 8 insertions(+), 6 deletions(-)
> >
>
> This looks very broken to me.
>
> Only the original skb (given as an argument to __netlink_deliver_tap()) is
> guaranteed to not disappear while the loop is performed.
>
> (There is no skb_clone() after the first netlink_to_full_skb())

Thank you; I will rework it

-RongQing
diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
index e3a0538ec0be..095b99e3c1fb 100644
--- a/net/netlink/af_netlink.c
+++ b/net/netlink/af_netlink.c
@@ -278,11 +278,11 @@ static bool netlink_filter_tap(const struct sk_buff *skb)
 	return false;
 }
 
-static int __netlink_deliver_tap_skb(struct sk_buff *skb,
+static int __netlink_deliver_tap_skb(struct sk_buff **skb,
				     struct net_device *dev)
 {
 	struct sk_buff *nskb;
-	struct sock *sk = skb->sk;
+	struct sock *sk = (*skb)->sk;
 	int ret = -ENOMEM;
 
 	if (!net_eq(dev_net(dev), sock_net(sk)))
@@ -290,10 +290,12 @@ static int __netlink_deliver_tap_skb(struct sk_buff *skb,
 
 	dev_hold(dev);
 
-	if (is_vmalloc_addr(skb->head))
-		nskb = netlink_to_full_skb(skb, GFP_ATOMIC);
+	if (is_vmalloc_addr((*skb)->head)) {
+		nskb = netlink_to_full_skb(*skb, GFP_ATOMIC);
+		*skb = nskb;
+	} else
-		nskb = skb_clone(skb, GFP_ATOMIC);
+		nskb = skb_clone(*skb, GFP_ATOMIC);
 	if (nskb) {
 		nskb->dev = dev;
 		nskb->protocol = htons((u16) sk->sk_protocol);
@@ -318,7 +320,7 @@ static void __netlink_deliver_tap(struct sk_buff *skb, struct netlink_tap_net *n
 		return;
 
 	list_for_each_entry_rcu(tmp, &nn->netlink_tap_all, list) {
-		ret = __netlink_deliver_tap_skb(skb, tmp->dev);
+		ret = __netlink_deliver_tap_skb(&skb, tmp->dev);
 		if (unlikely(ret))
 			break;
 	}
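One way to address the reviewer's objection, shown here as an untested sketch rather than anything actually posted to the list: do the vmalloc-to-full-skb conversion once in the caller, before the loop, so the converted copy is owned by __netlink_deliver_tap() itself and cannot disappear mid-iteration; __netlink_deliver_tap_skb() then unconditionally clones whatever it is handed. The restructuring below is hypothetical (only the function and helper names come from the existing code):

	/* Hypothetical restructuring, not the posted patch: convert a
	 * vmalloc-backed skb to a full skb once, hold that copy for the
	 * whole loop, and let the per-device path always skb_clone() it.
	 */
	static void __netlink_deliver_tap(struct sk_buff *skb,
					  struct netlink_tap_net *nn)
	{
		struct sk_buff *src = skb;
		struct netlink_tap *tmp;
		int ret;

		if (!netlink_filter_tap(skb))
			return;

		/* One full allocation instead of one per device.  src is
		 * private to this function, so it cannot go away while the
		 * loop runs -- unlike reassigning *skb inside the callee,
		 * where no reference is held on the replacement skb. */
		if (is_vmalloc_addr(skb->head)) {
			src = netlink_to_full_skb(skb, GFP_ATOMIC);
			if (!src)
				return;
		}

		list_for_each_entry_rcu(tmp, &nn->netlink_tap_all, list) {
			ret = __netlink_deliver_tap_skb(src, tmp->dev);
			if (unlikely(ret))
				break;
		}

		if (src != skb)
			consume_skb(src);	/* drop our private copy */
	}

This keeps the speedup the patch was after (a single full allocation for N devices) while preserving the invariant Eric points out: every skb passed to __netlink_deliver_tap_skb() stays valid for the duration of the loop, and each device still receives its own clone.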