Message ID | 20100422.001625.200862474.davem@davemloft.net
---|---
State | RFC, archived
Delegated to: | David Miller
On Thursday, 22 April 2010 at 00:16 -0700, David Miller wrote:
> From: Eric Dumazet <eric.dumazet@gmail.com>
> Date: Thu, 22 Apr 2010 09:10:33 +0200
>
>> On Wednesday, 21 April 2010 at 22:56 -0700, David Miller wrote:
>>
>>> Right, I've applied this, thanks.
>>>
>>> What we should probably do instead is call and NULL out the
>>> DEV_GSO_CB() destructor. Right?
>>
>> Yes, probably, I'll take a look at this if you want.
>
> It might look something like this:
>
> diff --git a/net/core/dev.c b/net/core/dev.c
> index 9bf1ccc..13241da 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -1892,6 +1892,20 @@ static inline void skb_orphan_try(struct sk_buff *skb)
>  		skb_orphan(skb);
>  }
>  
> +/*
> + * GSO packets need to be handled specially because such packets
> + * hold the normal SKB destructor in a backup pointer.
> + */
> +static inline void skb_orphan_try_gso(struct sk_buff *skb)
> +{
> +	if (!skb_tx(skb)->flags) {
> +		if (DEV_GSO_CB(skb)->destructor)
> +			DEV_GSO_CB(skb)->destructor(skb);
> +		DEV_GSO_CB(skb)->destructor = NULL;
> +		skb->sk = NULL;
> +	}
> +}
> +
>  int dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev,
>  			struct netdev_queue *txq)
>  {
> @@ -1937,6 +1951,7 @@ gso:
>  		if (dev->priv_flags & IFF_XMIT_DST_RELEASE)
>  			skb_dst_drop(nskb);
>  
> +		skb_orphan_try_gso(skb);
>  		rc = ops->ndo_start_xmit(nskb, dev);
>  		if (unlikely(rc != NETDEV_TX_OK)) {
>  			if (rc & ~NETDEV_TX_MASK)

Hmm... are you sure we want to call the destructor for each skb? Shouldn't we do it before the initial skb is split?