
about latencies

Message ID 49F0F49A.1050609@cosmosbay.com
State RFC, archived
Delegated to: David Miller

Commit Message

Eric Dumazet April 23, 2009, 11:07 p.m. UTC
Brandeburg, Jesse wrote:
> On Thu, 23 Apr 2009, Eric Dumazet wrote:
>> Some time later, NIC tells us TX was completed.
>> We free skb().
>>  1) dst_release()   (might dirty one cache line, that was increased by application cpu)
>>
>>  2) and more important... since UDP is now doing memory accounting...
>>
>> sock_wfree()
>>   -> sock_def_write_space()
>>      -> _read_lock()
>>      -> __wake_up_sync_key()
>>   and lot of functions calls to wakeup the task, for nothing since it
>> will just schedule again. Lot of cache lines dirtied...
>>
>>
>> We could improve this.
>>
>> 1) dst_release at xmit time, should save a cache line ping-pong on general case
>> 2) sock_wfree() in advance, done at transmit time (generally the thread/cpu doing the send)
> 
> how much does this affect socket accounting?  will the app then fill the 
> hardware tx ring all the time because there is no application throttling 
> due to delayed kfree?

The tx ring is limited to 256, 512 or 1024 elements, but yes, this might
defeat UDP memory accounting on the sending side, unless qdiscs are used...

An alternative would be to separate sleepers (those waiting for input from
those waiting for output) to avoid extra wakeups. I am pretty sure every
network dev has always wanted to do that eventually :)

> 
>> 3) changing bnx2_poll_work() to first call bnx2_rx_int(), then bnx2_tx_int() to consume tx.
> 
> at least all of the intel drivers that have a single vector (function) 
> handling interrupts, always call tx clean first so that any tx buffers are 
> free to be used immediately because the NAPI calls can generate tx traffic 
> (acks in the case of tcp and full routed packet transmits in the case of 
> forwarding)
> 
> of course in the case of MSI-X (igb/ixgbe) most times the tx cleanup is 
> handled independently (completely async) of rx.
> 
> 
>> What do you think?
> 
> you're running a latency sensitive test on a NOHZ kernel below, isn't that 
> a bad idea?

I tried the worst case to match (eventually) Christoph's data.
I usually do not use NOHZ, but what about Linux distros?

> 
> OT - the amount of timer code (*ns*) and spinlocks noted below seems 
> generally disturbing.
> 
>> function ftrace of one "tx completion, extra wakeup, incoming udp, outgoing udp"
> 
> thanks for posting this, very interesting to see the flow of calls.  A ton 
> of work is done to handle just two packets.

Yes, it costs about 30000 cycles...

>  
> might also be interesting to see what happens (how much shorter the call 
> chain is) on a UP kernel.

Here is a preliminary patch that does this; not for inclusion, for testing
only, and comments are welcome.

It saves more than 2 us in preliminary tests (non-NOHZ kernel, with CPU0
handling both the IRQ and the application).

# udpping -n 10000 -l 40 192.168.20.110
udp ping 0.0.0.0:9001 -> 192.168.20.110:9000 10000 samples ....
742759.61us (70.86us/74.28us/480.32us)

BTW, UDP mem accounting was added in 2.6.25.

[RFC] bnx2: Optimizations

1) dst_release() at xmit time; this should save a cache-line ping-pong in the general case,
   where TX completion is done by another cpu.

2) sock_wfree() in advance, done at transmit time (generally by the thread/cpu doing the send),
   instead of at completion time by another cpu.

This reduces UDP receive/send latency by at least 2 us.

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>


--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

David Miller April 24, 2009, 12:04 a.m. UTC | #1
From: Eric Dumazet <dada1@cosmosbay.com>
Date: Fri, 24 Apr 2009 01:07:06 +0200

> Brandeburg, Jesse wrote:
>> On Thu, 23 Apr 2009, Eric Dumazet wrote:
>>> We could improve this.
>>>
>>> 1) dst_release at xmit time, should save a cache line ping-pong on general case
>>> 2) sock_wfree() in advance, done at transmit time (generally the thread/cpu doing the send)
>> 
>> how much does this affect socket accounting?  will the app then fill the 
>> hardware tx ring all the time because there is no application throttling 
>> due to delayed kfree?
> 
> tx ring is limited to 256 or 512 or 1024 elements, but yes this might
> defeat udp mem accounting on sending side, unless using qdiscs...

I'm pretty sure you really can't do this.  It's been suggested
countless times in the past.

The whole point of the socket send buffer limits is to eliminate
the situation where one socket essentially hogs the TX queue of
the device.
Eric Dumazet April 24, 2009, 5:11 a.m. UTC | #2
David Miller wrote:
> From: Eric Dumazet <dada1@cosmosbay.com>
> Date: Fri, 24 Apr 2009 01:07:06 +0200
> 
>> Brandeburg, Jesse wrote:
>>> On Thu, 23 Apr 2009, Eric Dumazet wrote:
>>>> We could improve this.
>>>>
>>>> 1) dst_release at xmit time, should save a cache line ping-pong on general case
>>>> 2) sock_wfree() in advance, done at transmit time (generally the thread/cpu doing the send)
>>> how much does this affect socket accounting?  will the app then fill the 
>>> hardware tx ring all the time because there is no application throttling 
>>> due to delayed kfree?
>> tx ring is limited to 256 or 512 or 1024 elements, but yes this might
>> defeat udp mem accounting on sending side, unless using qdiscs...
> 
> I'm pretty sure you really can't do this.  It's been suggested
> countless times in the past.
> 
> The whole point of the socket send buffer limits is to eliminate
> the situation where one socket essentially hogs the TX queue of
> the device.


Yes, agreed!

Without splitting sk_sleep and enlarging _again_ "struct sock",
can't we make sock_def_write_space() smarter? Avoiding scheduling
like the plague, Your Honor :)

Don't we have a bit saying there is a sleeping writer?

We dirty sk_callback_lock and read "sk_wmem_alloc" and "sk_sndbuf";
we could first test a flag.



The current function is (not a patch, just quoted for reference):

static void sock_def_write_space(struct sock *sk)
{
        read_lock(&sk->sk_callback_lock);

        /* Do not wake up a writer until he can make "significant"
         * progress.  --DaveM
         */
        if ((atomic_read(&sk->sk_wmem_alloc) << 1) <= sk->sk_sndbuf) {
                if (sk->sk_sleep && waitqueue_active(sk->sk_sleep))
                        wake_up_interruptible_sync_poll(sk->sk_sleep, POLLOUT |
                                                POLLWRNORM | POLLWRBAND);

                /* Should agree with poll, otherwise some programs break */
                if (sock_writeable(sk))
                        sk_wake_async(sk, SOCK_WAKE_SPACE, POLL_OUT);
        }

        read_unlock(&sk->sk_callback_lock);
}

Thank you

Christoph Lameter (Ampere) April 24, 2009, 2 p.m. UTC | #3
On Fri, 24 Apr 2009, Eric Dumazet wrote:

> > you're running a latency sensitive test on a NOHZ kernel below, isn't that
> > a bad idea?
>
> I tried the worst case to match (eventually) Christoph's data.
> I usually do not use NOHZ, but what about Linux distros?

Why does NOHZ have an impact here? Timed wakeups can happen using
high res timers after all and a timer interrupt should be generated if the
network layer needs it.

Eric Dumazet April 24, 2009, 2:18 p.m. UTC | #4
Christoph Lameter wrote:
> On Fri, 24 Apr 2009, Eric Dumazet wrote:
> 
>>> you're running a latency sensitive test on a NOHZ kernel below, isn't that
>>> a bad idea?
>> I tried the worst case to match (eventually) Christoph's data.
>> I usually do not use NOHZ, but what about Linux distros?
> 
> Why does NOHZ have an impact here? Timed wakeups can happen using
> high res timers after all and a timer interrupt should be generated if the
> network layer needs it.

I am not able to say why it has an impact, but it is not cheap.

Compare the function trace I sent yesterday with this one, taken without
NOHZ; this one is far less noisy...

          <idle>-0     [000] 27907.377687: finish_task_switch <-__schedule
          <idle>-0     [000] 27907.377688: mwait_idle <-cpu_idle
          <idle>-0     [000] 27907.377735: do_IRQ <-common_interrupt
          <idle>-0     [000] 27907.377736: irq_enter <-do_IRQ
          <idle>-0     [000] 27907.377736: idle_cpu <-irq_enter
          <idle>-0     [000] 27907.377736: tick_check_idle <-irq_enter
          <idle>-0     [000] 27907.377736: tick_check_oneshot_broadcast <-tick_check_idle
          <idle>-0     [000] 27907.377736: handle_irq <-do_IRQ
          <idle>-0     [000] 27907.377737: irq_to_desc <-handle_irq
          <idle>-0     [000] 27907.377737: handle_edge_irq <-handle_irq
          <idle>-0     [000] 27907.377737: _spin_lock <-handle_edge_irq
          <idle>-0     [000] 27907.377737: ack_apic_edge <-handle_edge_irq
          <idle>-0     [000] 27907.377737: irq_to_desc <-ack_apic_edge
          <idle>-0     [000] 27907.377738: irq_complete_move <-ack_apic_edge
          <idle>-0     [000] 27907.377738: move_native_irq <-ack_apic_edge
          <idle>-0     [000] 27907.377738: irq_to_desc <-move_native_irq
          <idle>-0     [000] 27907.377738: native_apic_mem_write <-ack_apic_edge
          <idle>-0     [000] 27907.377738: handle_IRQ_event <-handle_edge_irq
          <idle>-0     [000] 27907.377739: bnx2_msi <-handle_IRQ_event
          <idle>-0     [000] 27907.377739: __napi_schedule <-bnx2_msi
          <idle>-0     [000] 27907.377739: note_interrupt <-handle_edge_irq
          <idle>-0     [000] 27907.377739: _spin_lock <-handle_edge_irq
          <idle>-0     [000] 27907.377740: irq_exit <-do_IRQ
          <idle>-0     [000] 27907.377740: do_softirq <-irq_exit
          <idle>-0     [000] 27907.377740: __do_softirq <-do_softirq
          <idle>-0     [000] 27907.377740: net_rx_action <-__do_softirq
          <idle>-0     [000] 27907.377740: bnx2_poll <-net_rx_action
          <idle>-0     [000] 27907.377741: bnx2_poll_work <-bnx2_poll
          <idle>-0     [000] 27907.377741: __netdev_alloc_skb <-bnx2_poll_work
          <idle>-0     [000] 27907.377741: __alloc_skb <-__netdev_alloc_skb
          <idle>-0     [000] 27907.377741: kmem_cache_alloc <-__alloc_skb
          <idle>-0     [000] 27907.377742: __kmalloc_track_caller <-__alloc_skb
          <idle>-0     [000] 27907.377742: get_slab <-__kmalloc_track_caller
          <idle>-0     [000] 27907.377742: skb_put <-bnx2_poll_work
          <idle>-0     [000] 27907.377742: eth_type_trans <-bnx2_poll_work
          <idle>-0     [000] 27907.377743: skb_pull <-eth_type_trans
          <idle>-0     [000] 27907.377743: __vlan_hwaccel_rx <-bnx2_poll_work
          <idle>-0     [000] 27907.377743: netif_receive_skb <-__vlan_hwaccel_rx
          <idle>-0     [000] 27907.377743: vlan_hwaccel_do_receive <-netif_receive_skb
          <idle>-0     [000] 27907.377743: netif_nit_deliver <-vlan_hwaccel_do_receive
          <idle>-0     [000] 27907.377744: ip_rcv <-netif_receive_skb
          <idle>-0     [000] 27907.377744: nf_hook_slow <-ip_rcv
          <idle>-0     [000] 27907.377744: nf_iterate <-nf_hook_slow
          <idle>-0     [000] 27907.377744: ip_rcv_finish <-ip_rcv
          <idle>-0     [000] 27907.377745: ip_route_input <-ip_rcv_finish
          <idle>-0     [000] 27907.377745: ip_local_deliver <-ip_rcv_finish
          <idle>-0     [000] 27907.377745: nf_hook_slow <-ip_local_deliver
          <idle>-0     [000] 27907.377745: nf_iterate <-nf_hook_slow
          <idle>-0     [000] 27907.377745: ipt_local_in_hook <-nf_iterate
          <idle>-0     [000] 27907.377746: ipt_do_table <-ipt_local_in_hook
          <idle>-0     [000] 27907.377746: local_bh_disable <-ipt_do_table
          <idle>-0     [000] 27907.377746: _read_lock <-ipt_do_table
          <idle>-0     [000] 27907.377746: _read_unlock_bh <-ipt_do_table
          <idle>-0     [000] 27907.377747: local_bh_enable_ip <-_read_unlock_bh
          <idle>-0     [000] 27907.377747: ip_local_deliver_finish <-ip_local_deliver
          <idle>-0     [000] 27907.377747: raw_local_deliver <-ip_local_deliver_finish
          <idle>-0     [000] 27907.377747: udp_rcv <-ip_local_deliver_finish
          <idle>-0     [000] 27907.377748: __udp4_lib_rcv <-udp_rcv
          <idle>-0     [000] 27907.377748: __udp4_lib_lookup <-__udp4_lib_rcv
          <idle>-0     [000] 27907.377748: udp_queue_rcv_skb <-__udp4_lib_rcv
          <idle>-0     [000] 27907.377748: _spin_lock <-udp_queue_rcv_skb
          <idle>-0     [000] 27907.377748: __udp_queue_rcv_skb <-udp_queue_rcv_skb
          <idle>-0     [000] 27907.377749: sock_queue_rcv_skb <-__udp_queue_rcv_skb
          <idle>-0     [000] 27907.377749: sk_filter <-sock_queue_rcv_skb
          <idle>-0     [000] 27907.377749: local_bh_disable <-sk_filter
          <idle>-0     [000] 27907.377749: local_bh_enable <-sk_filter
          <idle>-0     [000] 27907.377749: skb_queue_tail <-sock_queue_rcv_skb
          <idle>-0     [000] 27907.377750: _spin_lock_irqsave <-skb_queue_tail
          <idle>-0     [000] 27907.377750: _spin_unlock_irqrestore <-skb_queue_tail
          <idle>-0     [000] 27907.377750: sock_def_readable <-sock_queue_rcv_skb
          <idle>-0     [000] 27907.377750: _read_lock <-sock_def_readable
          <idle>-0     [000] 27907.377750: __wake_up_sync_key <-sock_def_readable
          <idle>-0     [000] 27907.377751: _spin_lock_irqsave <-__wake_up_sync_key
          <idle>-0     [000] 27907.377751: __wake_up_common <-__wake_up_sync_key
          <idle>-0     [000] 27907.377751: autoremove_wake_function <-__wake_up_common
          <idle>-0     [000] 27907.377751: default_wake_function <-autoremove_wake_function
          <idle>-0     [000] 27907.377751: try_to_wake_up <-default_wake_function
          <idle>-0     [000] 27907.377752: task_rq_lock <-try_to_wake_up
          <idle>-0     [000] 27907.377752: _spin_lock <-task_rq_lock
          <idle>-0     [000] 27907.377752: select_task_rq_fair <-try_to_wake_up
          <idle>-0     [000] 27907.377752: activate_task <-try_to_wake_up
          <idle>-0     [000] 27907.377753: enqueue_task <-activate_task
          <idle>-0     [000] 27907.377753: enqueue_task_fair <-enqueue_task
          <idle>-0     [000] 27907.377753: update_curr <-enqueue_task_fair
          <idle>-0     [000] 27907.377753: place_entity <-enqueue_task_fair
          <idle>-0     [000] 27907.377753: __enqueue_entity <-enqueue_task_fair
          <idle>-0     [000] 27907.377754: hrtick_update <-enqueue_task_fair
          <idle>-0     [000] 27907.377754: check_preempt_curr_idle <-try_to_wake_up
          <idle>-0     [000] 27907.377754: resched_task <-check_preempt_curr_idle
          <idle>-0     [000] 27907.377754: _spin_unlock_irqrestore <-try_to_wake_up
          <idle>-0     [000] 27907.377755: _spin_unlock_irqrestore <-__wake_up_sync_key
          <idle>-0     [000] 27907.377756: napi_complete <-bnx2_poll
          <idle>-0     [000] 27907.377756: napi_gro_flush <-napi_complete
          <idle>-0     [000] 27907.377756: __napi_complete <-napi_complete
          <idle>-0     [000] 27907.377756: rcu_bh_qsctr_inc <-__do_softirq
          <idle>-0     [000] 27907.377757: _local_bh_enable <-__do_softirq
          <idle>-0     [000] 27907.377757: schedule <-cpu_idle
          <idle>-0     [000] 27907.377757: __schedule <-schedule
          <idle>-0     [000] 27907.377757: rcu_qsctr_inc <-__schedule
          <idle>-0     [000] 27907.377758: _spin_lock_irq <-__schedule
          <idle>-0     [000] 27907.377758: put_prev_task_idle <-__schedule
          <idle>-0     [000] 27907.377758: pick_next_task_fair <-__schedule
          <idle>-0     [000] 27907.377758: __dequeue_entity <-pick_next_task_fair
         udpping-12642 [000] 27907.377759: finish_task_switch <-__schedule
         udpping-12642 [000] 27907.377759: finish_wait <-__skb_recv_datagram
         udpping-12642 [000] 27907.377759: _spin_lock_irqsave <-__skb_recv_datagram
         udpping-12642 [000] 27907.377760: _spin_unlock_irqrestore <-__skb_recv_datagram
         udpping-12642 [000] 27907.377760: skb_copy_datagram_iovec <-udp_recvmsg
         udpping-12642 [000] 27907.377760: memcpy_toiovec <-skb_copy_datagram_iovec
         udpping-12642 [000] 27907.377760: copy_to_user <-memcpy_toiovec
         udpping-12642 [000] 27907.377761: lock_sock_nested <-udp_recvmsg
         udpping-12642 [000] 27907.377761: _spin_lock_bh <-lock_sock_nested
         udpping-12642 [000] 27907.377761: local_bh_disable <-_spin_lock_bh
         udpping-12642 [000] 27907.377761: local_bh_enable <-lock_sock_nested
         udpping-12642 [000] 27907.377761: skb_free_datagram <-udp_recvmsg
         udpping-12642 [000] 27907.377762: consume_skb <-skb_free_datagram
         udpping-12642 [000] 27907.377762: __kfree_skb <-consume_skb
         udpping-12642 [000] 27907.377762: skb_release_head_state <-__kfree_skb
         udpping-12642 [000] 27907.377762: dst_release <-skb_release_head_state
         udpping-12642 [000] 27907.377762: sock_rfree <-skb_release_head_state
         udpping-12642 [000] 27907.377763: skb_release_data <-__kfree_skb
         udpping-12642 [000] 27907.377763: kfree <-skb_release_data
         udpping-12642 [000] 27907.377763: kmem_cache_free <-__kfree_skb
         udpping-12642 [000] 27907.377763: release_sock <-udp_recvmsg
         udpping-12642 [000] 27907.377764: _spin_lock_bh <-release_sock
         udpping-12642 [000] 27907.377764: local_bh_disable <-_spin_lock_bh
         udpping-12642 [000] 27907.377764: _spin_unlock_bh <-release_sock
         udpping-12642 [000] 27907.377764: local_bh_enable_ip <-_spin_unlock_bh
         udpping-12642 [000] 27907.377764: move_addr_to_user <-sys_recvfrom
         udpping-12642 [000] 27907.377765: copy_to_user <-move_addr_to_user
         udpping-12642 [000] 27907.377765: sys_socketcall <-sysenter_do_call
         udpping-12642 [000] 27907.377765: copy_from_user <-sys_socketcall
         udpping-12642 [000] 27907.377766: sys_sendto <-sys_socketcall
         udpping-12642 [000] 27907.377766: sockfd_lookup_light <-sys_sendto
         udpping-12642 [000] 27907.377766: fget_light <-sockfd_lookup_light
         udpping-12642 [000] 27907.377766: move_addr_to_kernel <-sys_sendto
         udpping-12642 [000] 27907.377766: copy_from_user <-move_addr_to_kernel
         udpping-12642 [000] 27907.377767: sock_sendmsg <-sys_sendto
         udpping-12642 [000] 27907.377767: inet_sendmsg <-sock_sendmsg
         udpping-12642 [000] 27907.377767: udp_sendmsg <-inet_sendmsg
         udpping-12642 [000] 27907.377767: sock_tx_timestamp <-udp_sendmsg
         udpping-12642 [000] 27907.377768: ip_route_output_flow <-udp_sendmsg
         udpping-12642 [000] 27907.377768: __ip_route_output_key <-ip_route_output_flow
         udpping-12642 [000] 27907.377768: local_bh_disable <-__ip_route_output_key
         udpping-12642 [000] 27907.377768: local_bh_enable <-__ip_route_output_key
         udpping-12642 [000] 27907.377769: __xfrm_lookup <-ip_route_output_flow
         udpping-12642 [000] 27907.377769: lock_sock_nested <-udp_sendmsg
         udpping-12642 [000] 27907.377769: _spin_lock_bh <-lock_sock_nested
         udpping-12642 [000] 27907.377769: local_bh_disable <-_spin_lock_bh
         udpping-12642 [000] 27907.377769: local_bh_enable <-lock_sock_nested
         udpping-12642 [000] 27907.377770: ip_append_data <-udp_sendmsg
         udpping-12642 [000] 27907.377770: sock_alloc_send_skb <-ip_append_data
         udpping-12642 [000] 27907.377770: sock_alloc_send_pskb <-sock_alloc_send_skb
         udpping-12642 [000] 27907.377770: __alloc_skb <-sock_alloc_send_pskb
         udpping-12642 [000] 27907.377771: kmem_cache_alloc <-__alloc_skb
         udpping-12642 [000] 27907.377771: __kmalloc_track_caller <-__alloc_skb
         udpping-12642 [000] 27907.377771: get_slab <-__kmalloc_track_caller
         udpping-12642 [000] 27907.377771: skb_put <-ip_append_data
         udpping-12642 [000] 27907.377772: ip_generic_getfrag <-ip_append_data
         udpping-12642 [000] 27907.377772: csum_partial_copy_fromiovecend <-ip_generic_getfrag
         udpping-12642 [000] 27907.377772: udp_push_pending_frames <-udp_sendmsg
         udpping-12642 [000] 27907.377772: ip_push_pending_frames <-udp_push_pending_frames
         udpping-12642 [000] 27907.377773: ip_local_out <-ip_push_pending_frames
         udpping-12642 [000] 27907.377773: __ip_local_out <-ip_local_out
         udpping-12642 [000] 27907.377773: nf_hook_slow <-__ip_local_out
         udpping-12642 [000] 27907.377773: nf_iterate <-nf_hook_slow
         udpping-12642 [000] 27907.377773: ipt_local_out_hook <-nf_iterate
         udpping-12642 [000] 27907.377774: ipt_do_table <-ipt_local_out_hook
         udpping-12642 [000] 27907.377774: local_bh_disable <-ipt_do_table
         udpping-12642 [000] 27907.377774: _read_lock <-ipt_do_table
         udpping-12642 [000] 27907.377774: _read_unlock_bh <-ipt_do_table
         udpping-12642 [000] 27907.377775: local_bh_enable_ip <-_read_unlock_bh
         udpping-12642 [000] 27907.377775: ip_output <-ip_local_out
         udpping-12642 [000] 27907.377775: nf_hook_slow <-ip_output
         udpping-12642 [000] 27907.377775: nf_iterate <-nf_hook_slow
         udpping-12642 [000] 27907.377775: ip_finish_output <-ip_output
         udpping-12642 [000] 27907.377776: skb_push <-ip_finish_output
         udpping-12642 [000] 27907.377776: dev_queue_xmit <-ip_finish_output
         udpping-12642 [000] 27907.377776: local_bh_disable <-dev_queue_xmit
         udpping-12642 [000] 27907.377776: _spin_lock <-dev_queue_xmit
         udpping-12642 [000] 27907.377777: pfifo_fast_enqueue <-dev_queue_xmit
         udpping-12642 [000] 27907.377777: __qdisc_run <-dev_queue_xmit
         udpping-12642 [000] 27907.377777: pfifo_fast_dequeue <-__qdisc_run
         udpping-12642 [000] 27907.377777: _spin_lock <-__qdisc_run
         udpping-12642 [000] 27907.377777: dev_hard_start_xmit <-__qdisc_run
         udpping-12642 [000] 27907.377778: vlan_dev_hwaccel_hard_start_xmit <-dev_hard_start_xmit
         udpping-12642 [000] 27907.377778: dev_queue_xmit <-vlan_dev_hwaccel_hard_start_xmit
         udpping-12642 [000] 27907.377778: local_bh_disable <-dev_queue_xmit
         udpping-12642 [000] 27907.377778: dev_hard_start_xmit <-dev_queue_xmit
         udpping-12642 [000] 27907.377778: bond_start_xmit <-dev_hard_start_xmit
         udpping-12642 [000] 27907.377779: _read_lock <-bond_start_xmit
         udpping-12642 [000] 27907.377779: _read_lock <-bond_start_xmit
         udpping-12642 [000] 27907.377779: bond_dev_queue_xmit <-bond_start_xmit
         udpping-12642 [000] 27907.377779: dev_queue_xmit <-bond_dev_queue_xmit
         udpping-12642 [000] 27907.377780: local_bh_disable <-dev_queue_xmit
         udpping-12642 [000] 27907.377780: _spin_lock <-dev_queue_xmit
         udpping-12642 [000] 27907.377780: pfifo_fast_enqueue <-dev_queue_xmit
         udpping-12642 [000] 27907.377780: __qdisc_run <-dev_queue_xmit
         udpping-12642 [000] 27907.377781: pfifo_fast_dequeue <-__qdisc_run
         udpping-12642 [000] 27907.377781: _spin_lock <-__qdisc_run
         udpping-12642 [000] 27907.377781: dev_hard_start_xmit <-__qdisc_run
         udpping-12642 [000] 27907.377782: bnx2_start_xmit <-dev_hard_start_xmit
         udpping-12642 [000] 27907.377782: skb_dma_map <-bnx2_start_xmit
         udpping-12642 [000] 27907.377782: nommu_map_page <-skb_dma_map
         udpping-12642 [000] 27907.377782: dst_release <-bnx2_start_xmit
         udpping-12642 [000] 27907.377782: sock_wfree <-bnx2_start_xmit
         udpping-12642 [000] 27907.377783: sock_def_write_space <-sock_wfree
         udpping-12642 [000] 27907.377783: _read_lock <-sock_def_write_space
         udpping-12642 [000] 27907.377783: _spin_lock <-__qdisc_run
         udpping-12642 [000] 27907.377784: local_bh_enable <-dev_queue_xmit
         udpping-12642 [000] 27907.377784: local_bh_enable <-dev_queue_xmit
         udpping-12642 [000] 27907.377784: _spin_lock <-__qdisc_run
         udpping-12642 [000] 27907.377784: local_bh_enable <-dev_queue_xmit
         udpping-12642 [000] 27907.377785: ip_cork_release <-ip_push_pending_frames
         udpping-12642 [000] 27907.377785: kfree <-ip_cork_release
         udpping-12642 [000] 27907.377785: dst_release <-ip_cork_release
         udpping-12642 [000] 27907.377785: release_sock <-udp_sendmsg
         udpping-12642 [000] 27907.377786: _spin_lock_bh <-release_sock
         udpping-12642 [000] 27907.377786: local_bh_disable <-_spin_lock_bh
         udpping-12642 [000] 27907.377786: _spin_unlock_bh <-release_sock
         udpping-12642 [000] 27907.377786: local_bh_enable_ip <-_spin_unlock_bh
         udpping-12642 [000] 27907.377787: sys_socketcall <-sysenter_do_call
         udpping-12642 [000] 27907.377787: copy_from_user <-sys_socketcall
         udpping-12642 [000] 27907.377787: sys_recvfrom <-sys_socketcall
         udpping-12642 [000] 27907.377787: sockfd_lookup_light <-sys_recvfrom
         udpping-12642 [000] 27907.377788: fget_light <-sockfd_lookup_light
         udpping-12642 [000] 27907.377788: sock_recvmsg <-sys_recvfrom
         udpping-12642 [000] 27907.377788: sock_common_recvmsg <-sock_recvmsg
         udpping-12642 [000] 27907.377788: udp_recvmsg <-sock_common_recvmsg
         udpping-12642 [000] 27907.377789: __skb_recv_datagram <-udp_recvmsg
         udpping-12642 [000] 27907.377789: _spin_lock_irqsave <-__skb_recv_datagram
         udpping-12642 [000] 27907.377789: _spin_unlock_irqrestore <-__skb_recv_datagram
         udpping-12642 [000] 27907.377789: prepare_to_wait_exclusive <-__skb_recv_datagram
         udpping-12642 [000] 27907.377789: _spin_lock_irqsave <-prepare_to_wait_exclusive
         udpping-12642 [000] 27907.377790: _spin_unlock_irqrestore <-prepare_to_wait_exclusive
         udpping-12642 [000] 27907.377790: do_IRQ <-common_interrupt
         udpping-12642 [000] 27907.377790: irq_enter <-do_IRQ
         udpping-12642 [000] 27907.377790: idle_cpu <-irq_enter
         udpping-12642 [000] 27907.377791: handle_irq <-do_IRQ
         udpping-12642 [000] 27907.377791: irq_to_desc <-handle_irq
         udpping-12642 [000] 27907.377791: handle_edge_irq <-handle_irq
         udpping-12642 [000] 27907.377791: _spin_lock <-handle_edge_irq
         udpping-12642 [000] 27907.377792: ack_apic_edge <-handle_edge_irq
         udpping-12642 [000] 27907.377792: irq_to_desc <-ack_apic_edge
         udpping-12642 [000] 27907.377792: irq_complete_move <-ack_apic_edge
         udpping-12642 [000] 27907.377792: move_native_irq <-ack_apic_edge
         udpping-12642 [000] 27907.377792: irq_to_desc <-move_native_irq
         udpping-12642 [000] 27907.377792: native_apic_mem_write <-ack_apic_edge
         udpping-12642 [000] 27907.377793: handle_IRQ_event <-handle_edge_irq
         udpping-12642 [000] 27907.377793: bnx2_msi <-handle_IRQ_event
         udpping-12642 [000] 27907.377793: __napi_schedule <-bnx2_msi
         udpping-12642 [000] 27907.377793: note_interrupt <-handle_edge_irq
         udpping-12642 [000] 27907.377794: _spin_lock <-handle_edge_irq
         udpping-12642 [000] 27907.377794: irq_exit <-do_IRQ
         udpping-12642 [000] 27907.377794: do_softirq <-irq_exit
         udpping-12642 [000] 27907.377794: __do_softirq <-do_softirq
         udpping-12642 [000] 27907.377795: net_rx_action <-__do_softirq
         udpping-12642 [000] 27907.377795: bnx2_poll <-net_rx_action
         udpping-12642 [000] 27907.377795: bnx2_poll_work <-bnx2_poll
         udpping-12642 [000] 27907.377795: skb_dma_unmap <-bnx2_poll_work
         udpping-12642 [000] 27907.377795: consume_skb <-bnx2_poll_work
         udpping-12642 [000] 27907.377796: __kfree_skb <-consume_skb
         udpping-12642 [000] 27907.377796: skb_release_head_state <-__kfree_skb
         udpping-12642 [000] 27907.377796: dst_release <-skb_release_head_state
         udpping-12642 [000] 27907.377796: skb_release_data <-__kfree_skb
         udpping-12642 [000] 27907.377797: kfree <-skb_release_data
         udpping-12642 [000] 27907.377797: kmem_cache_free <-__kfree_skb
         udpping-12642 [000] 27907.377797: napi_complete <-bnx2_poll
         udpping-12642 [000] 27907.377797: napi_gro_flush <-napi_complete
         udpping-12642 [000] 27907.377798: __napi_complete <-napi_complete
         udpping-12642 [000] 27907.377798: rcu_bh_qsctr_inc <-__do_softirq
         udpping-12642 [000] 27907.377798: _local_bh_enable <-__do_softirq
         udpping-12642 [000] 27907.377799: schedule_timeout <-__skb_recv_datagram
         udpping-12642 [000] 27907.377799: schedule <-schedule_timeout
         udpping-12642 [000] 27907.377799: __schedule <-schedule
         udpping-12642 [000] 27907.377799: rcu_qsctr_inc <-__schedule
         udpping-12642 [000] 27907.377799: _spin_lock_irq <-__schedule
         udpping-12642 [000] 27907.377800: deactivate_task <-__schedule
         udpping-12642 [000] 27907.377800: dequeue_task <-deactivate_task
         udpping-12642 [000] 27907.377800: dequeue_task_fair <-dequeue_task
         udpping-12642 [000] 27907.377800: update_curr <-dequeue_task_fair
         udpping-12642 [000] 27907.377801: hrtick_update <-dequeue_task_fair
         udpping-12642 [000] 27907.377801: msecs_to_jiffies <-__schedule
         udpping-12642 [000] 27907.377801: msecs_to_jiffies <-__schedule
         udpping-12642 [000] 27907.377801: put_prev_task_fair <-__schedule
         udpping-12642 [000] 27907.377801: pick_next_task_fair <-__schedule
         udpping-12642 [000] 27907.377802: pick_next_task_rt <-__schedule
         udpping-12642 [000] 27907.377802: pick_next_task_fair <-__schedule
         udpping-12642 [000] 27907.377802: pick_next_task_idle <-__schedule
          <idle>-0     [000] 27907.377802: finish_task_switch <-__schedule
          <idle>-0     [000] 27907.377803: mwait_idle <-cpu_idle
          <idle>-0     [000] 27907.377851: do_IRQ <-common_interrupt
          <idle>-0     [000] 27907.377851: irq_enter <-do_IRQ
          <idle>-0     [000] 27907.377851: idle_cpu <-irq_enter
          <idle>-0     [000] 27907.377851: tick_check_idle <-irq_enter
          <idle>-0     [000] 27907.377851: tick_check_oneshot_broadcast <-tick_check_idle
          <idle>-0     [000] 27907.377852: handle_irq <-do_IRQ
          <idle>-0     [000] 27907.377852: irq_to_desc <-handle_irq
          <idle>-0     [000] 27907.377852: handle_edge_irq <-handle_irq
          <idle>-0     [000] 27907.377852: _spin_lock <-handle_edge_irq
          <idle>-0     [000] 27907.377852: ack_apic_edge <-handle_edge_irq
          <idle>-0     [000] 27907.377853: irq_to_desc <-ack_apic_edge
          <idle>-0     [000] 27907.377853: irq_complete_move <-ack_apic_edge
          <idle>-0     [000] 27907.377853: move_native_irq <-ack_apic_edge
          <idle>-0     [000] 27907.377853: irq_to_desc <-move_native_irq
          <idle>-0     [000] 27907.377853: native_apic_mem_write <-ack_apic_edge
          <idle>-0     [000] 27907.377854: handle_IRQ_event <-handle_edge_irq
          <idle>-0     [000] 27907.377854: bnx2_msi <-handle_IRQ_event
          <idle>-0     [000] 27907.377854: __napi_schedule <-bnx2_msi
          <idle>-0     [000] 27907.377854: note_interrupt <-handle_edge_irq
          <idle>-0     [000] 27907.377855: _spin_lock <-handle_edge_irq
          <idle>-0     [000] 27907.377855: irq_exit <-do_IRQ
          <idle>-0     [000] 27907.377855: do_softirq <-irq_exit
          <idle>-0     [000] 27907.377855: __do_softirq <-do_softirq
          <idle>-0     [000] 27907.377856: net_rx_action <-__do_softirq
          <idle>-0     [000] 27907.377856: bnx2_poll <-net_rx_action
          <idle>-0     [000] 27907.377856: bnx2_poll_work <-bnx2_poll
          <idle>-0     [000] 27907.377856: __netdev_alloc_skb <-bnx2_poll_work
          <idle>-0     [000] 27907.377856: __alloc_skb <-__netdev_alloc_skb
          <idle>-0     [000] 27907.377857: kmem_cache_alloc <-__alloc_skb
          <idle>-0     [000] 27907.377857: __kmalloc_track_caller <-__alloc_skb
          <idle>-0     [000] 27907.377857: get_slab <-__kmalloc_track_caller
          <idle>-0     [000] 27907.377857: skb_put <-bnx2_poll_work
          <idle>-0     [000] 27907.377858: eth_type_trans <-bnx2_poll_work
          <idle>-0     [000] 27907.377858: skb_pull <-eth_type_trans
          <idle>-0     [000] 27907.377858: __vlan_hwaccel_rx <-bnx2_poll_work
          <idle>-0     [000] 27907.377858: netif_receive_skb <-__vlan_hwaccel_rx
          <idle>-0     [000] 27907.377859: vlan_hwaccel_do_receive <-netif_receive_skb
          <idle>-0     [000] 27907.377859: netif_nit_deliver <-vlan_hwaccel_do_receive
          <idle>-0     [000] 27907.377859: ip_rcv <-netif_receive_skb
          <idle>-0     [000] 27907.377859: nf_hook_slow <-ip_rcv
          <idle>-0     [000] 27907.377859: nf_iterate <-nf_hook_slow
          <idle>-0     [000] 27907.377860: ip_rcv_finish <-ip_rcv
          <idle>-0     [000] 27907.377860: ip_route_input <-ip_rcv_finish
          <idle>-0     [000] 27907.377860: ip_local_deliver <-ip_rcv_finish
          <idle>-0     [000] 27907.377861: nf_hook_slow <-ip_local_deliver
          <idle>-0     [000] 27907.377861: nf_iterate <-nf_hook_slow
          <idle>-0     [000] 27907.377861: ipt_local_in_hook <-nf_iterate
          <idle>-0     [000] 27907.377861: ipt_do_table <-ipt_local_in_hook
          <idle>-0     [000] 27907.377861: local_bh_disable <-ipt_do_table
          <idle>-0     [000] 27907.377862: _read_lock <-ipt_do_table
          <idle>-0     [000] 27907.377862: _read_unlock_bh <-ipt_do_table
          <idle>-0     [000] 27907.377862: local_bh_enable_ip <-_read_unlock_bh
          <idle>-0     [000] 27907.377862: ip_local_deliver_finish <-ip_local_deliver
          <idle>-0     [000] 27907.377863: raw_local_deliver <-ip_local_deliver_finish
          <idle>-0     [000] 27907.377863: udp_rcv <-ip_local_deliver_finish
          <idle>-0     [000] 27907.377863: __udp4_lib_rcv <-udp_rcv
          <idle>-0     [000] 27907.377863: __udp4_lib_lookup <-__udp4_lib_rcv
          <idle>-0     [000] 27907.377863: udp_queue_rcv_skb <-__udp4_lib_rcv
          <idle>-0     [000] 27907.377864: _spin_lock <-udp_queue_rcv_skb
          <idle>-0     [000] 27907.377864: __udp_queue_rcv_skb <-udp_queue_rcv_skb
          <idle>-0     [000] 27907.377864: sock_queue_rcv_skb <-__udp_queue_rcv_skb
          <idle>-0     [000] 27907.377864: sk_filter <-sock_queue_rcv_skb
          <idle>-0     [000] 27907.377864: local_bh_disable <-sk_filter
          <idle>-0     [000] 27907.377865: local_bh_enable <-sk_filter
          <idle>-0     [000] 27907.377865: skb_queue_tail <-sock_queue_rcv_skb
          <idle>-0     [000] 27907.377865: smp_apic_timer_interrupt <-apic_timer_interrupt
          <idle>-0     [000] 27907.377865: native_apic_mem_write <-smp_apic_timer_interrupt
          <idle>-0     [000] 27907.377866: irq_enter <-smp_apic_timer_interrupt
          <idle>-0     [000] 27907.377866: idle_cpu <-irq_enter
          <idle>-0     [000] 27907.377866: hrtimer_interrupt <-smp_apic_timer_interrupt
          <idle>-0     [000] 27907.377866: ktime_get <-hrtimer_interrupt
          <idle>-0     [000] 27907.377867: ktime_get_ts <-ktime_get
          <idle>-0     [000] 27907.377867: getnstimeofday <-ktime_get_ts
          <idle>-0     [000] 27907.377867: set_normalized_timespec <-ktime_get_ts
          <idle>-0     [000] 27907.377867: _spin_lock <-hrtimer_interrupt
          <idle>-0     [000] 27907.377867: _spin_lock <-hrtimer_interrupt
          <idle>-0     [000] 27907.377868: __run_hrtimer <-hrtimer_interrupt
          <idle>-0     [000] 27907.377868: __remove_hrtimer <-__run_hrtimer
          <idle>-0     [000] 27907.377868: timer_stats_update_stats <-__run_hrtimer
          <idle>-0     [000] 27907.377868: tick_sched_timer <-__run_hrtimer
          <idle>-0     [000] 27907.377869: ktime_get <-tick_sched_timer
          <idle>-0     [000] 27907.377869: ktime_get_ts <-ktime_get
          <idle>-0     [000] 27907.377869: getnstimeofday <-ktime_get_ts
          <idle>-0     [000] 27907.377869: set_normalized_timespec <-ktime_get_ts
          <idle>-0     [000] 27907.377869: _spin_lock <-tick_sched_timer
          <idle>-0     [000] 27907.377870: do_timer <-tick_sched_timer
          <idle>-0     [000] 27907.377870: update_wall_time <-do_timer
          <idle>-0     [000] 27907.377871: clocksource_get_next <-update_wall_time
          <idle>-0     [000] 27907.377871: _spin_lock_irqsave <-clocksource_get_next
          <idle>-0     [000] 27907.377871: _spin_unlock_irqrestore <-clocksource_get_next
          <idle>-0     [000] 27907.377872: update_process_times <-tick_sched_timer
          <idle>-0     [000] 27907.377872: account_process_tick <-update_process_times
          <idle>-0     [000] 27907.377872: account_idle_time <-account_process_tick
          <idle>-0     [000] 27907.377872: run_local_timers <-update_process_times
          <idle>-0     [000] 27907.377872: hrtimer_run_queues <-run_local_timers
          <idle>-0     [000] 27907.377873: raise_softirq <-run_local_timers
          <idle>-0     [000] 27907.377873: softlockup_tick <-run_local_timers
          <idle>-0     [000] 27907.377873: rcu_pending <-update_process_times
          <idle>-0     [000] 27907.377873: rcu_check_callbacks <-update_process_times
          <idle>-0     [000] 27907.377874: idle_cpu <-rcu_check_callbacks
          <idle>-0     [000] 27907.377874: raise_softirq <-rcu_check_callbacks
          <idle>-0     [000] 27907.377874: printk_tick <-update_process_times
          <idle>-0     [000] 27907.377874: scheduler_tick <-update_process_times
          <idle>-0     [000] 27907.377875: ktime_get <-sched_clock_tick
          <idle>-0     [000] 27907.377875: ktime_get_ts <-ktime_get
          <idle>-0     [000] 27907.377875: getnstimeofday <-ktime_get_ts
          <idle>-0     [000] 27907.377875: set_normalized_timespec <-ktime_get_ts
          <idle>-0     [000] 27907.377875: _spin_lock <-scheduler_tick
          <idle>-0     [000] 27907.377876: task_tick_idle <-scheduler_tick
          <idle>-0     [000] 27907.377876: run_posix_cpu_timers <-update_process_times
          <idle>-0     [000] 27907.377876: profile_tick <-tick_sched_timer
          <idle>-0     [000] 27907.377876: hrtimer_forward <-tick_sched_timer
          <idle>-0     [000] 27907.377877: ktime_add_safe <-hrtimer_forward
          <idle>-0     [000] 27907.377877: ktime_add_safe <-hrtimer_forward
          <idle>-0     [000] 27907.377877: _spin_lock <-__run_hrtimer
          <idle>-0     [000] 27907.377877: enqueue_hrtimer <-__run_hrtimer
          <idle>-0     [000] 27907.377878: tick_program_event <-hrtimer_interrupt
          <idle>-0     [000] 27907.377878: tick_dev_program_event <-tick_program_event
          <idle>-0     [000] 27907.377878: ktime_get <-tick_dev_program_event
          <idle>-0     [000] 27907.377878: ktime_get_ts <-ktime_get
          <idle>-0     [000] 27907.377878: getnstimeofday <-ktime_get_ts
          <idle>-0     [000] 27907.377879: set_normalized_timespec <-ktime_get_ts
          <idle>-0     [000] 27907.377879: clockevents_program_event <-tick_dev_program_event
          <idle>-0     [000] 27907.377879: lapic_next_event <-clockevents_program_event
          <idle>-0     [000] 27907.377879: native_apic_mem_write <-lapic_next_event
          <idle>-0     [000] 27907.377880: irq_exit <-smp_apic_timer_interrupt
          <idle>-0     [000] 27907.377880: _spin_lock_irqsave <-skb_queue_tail
          <idle>-0     [000] 27907.377880: _spin_unlock_irqrestore <-skb_queue_tail
          <idle>-0     [000] 27907.377880: sock_def_readable <-sock_queue_rcv_skb
          <idle>-0     [000] 27907.377881: _read_lock <-sock_def_readable
          <idle>-0     [000] 27907.377881: __wake_up_sync_key <-sock_def_readable
          <idle>-0     [000] 27907.377881: _spin_lock_irqsave <-__wake_up_sync_key
          <idle>-0     [000] 27907.377881: __wake_up_common <-__wake_up_sync_key
          <idle>-0     [000] 27907.377881: autoremove_wake_function <-__wake_up_common
          <idle>-0     [000] 27907.377882: default_wake_function <-autoremove_wake_function
          <idle>-0     [000] 27907.377882: try_to_wake_up <-default_wake_function
          <idle>-0     [000] 27907.377882: task_rq_lock <-try_to_wake_up
          <idle>-0     [000] 27907.377882: _spin_lock <-task_rq_lock
          <idle>-0     [000] 27907.377883: select_task_rq_fair <-try_to_wake_up
          <idle>-0     [000] 27907.377883: activate_task <-try_to_wake_up
          <idle>-0     [000] 27907.377883: enqueue_task <-activate_task
          <idle>-0     [000] 27907.377884: enqueue_task_fair <-enqueue_task
          <idle>-0     [000] 27907.377884: update_curr <-enqueue_task_fair
          <idle>-0     [000] 27907.377884: place_entity <-enqueue_task_fair
          <idle>-0     [000] 27907.377884: __enqueue_entity <-enqueue_task_fair
          <idle>-0     [000] 27907.377885: hrtick_update <-enqueue_task_fair
          <idle>-0     [000] 27907.377885: check_preempt_curr_idle <-try_to_wake_up
          <idle>-0     [000] 27907.377885: resched_task <-check_preempt_curr_idle
          <idle>-0     [000] 27907.377885: _spin_unlock_irqrestore <-try_to_wake_up
          <idle>-0     [000] 27907.377886: _spin_unlock_irqrestore <-__wake_up_sync_key
          <idle>-0     [000] 27907.377886: napi_complete <-bnx2_poll
          <idle>-0     [000] 27907.377886: napi_gro_flush <-napi_complete
          <idle>-0     [000] 27907.377887: __napi_complete <-napi_complete
          <idle>-0     [000] 27907.377887: rcu_bh_qsctr_inc <-__do_softirq
          <idle>-0     [000] 27907.377887: run_timer_softirq <-__do_softirq
          <idle>-0     [000] 27907.377887: hrtimer_run_pending <-run_timer_softirq
          <idle>-0     [000] 27907.377888: _spin_lock_irq <-run_timer_softirq
          <idle>-0     [000] 27907.377888: rcu_bh_qsctr_inc <-__do_softirq
          <idle>-0     [000] 27907.377888: rcu_process_callbacks <-__do_softirq
          <idle>-0     [000] 27907.377888: __rcu_process_callbacks <-rcu_process_callbacks
          <idle>-0     [000] 27907.377889: cpu_quiet <-__rcu_process_callbacks
          <idle>-0     [000] 27907.377889: _spin_lock_irqsave <-cpu_quiet
          <idle>-0     [000] 27907.377889: cpu_quiet_msk <-cpu_quiet
          <idle>-0     [000] 27907.377889: _spin_unlock_irqrestore <-cpu_quiet_msk
          <idle>-0     [000] 27907.377890: __rcu_process_callbacks <-rcu_process_callbacks
          <idle>-0     [000] 27907.377890: force_quiescent_state <-__rcu_process_callbacks
          <idle>-0     [000] 27907.377890: rcu_bh_qsctr_inc <-__do_softirq
          <idle>-0     [000] 27907.377890: _local_bh_enable <-__do_softirq
          <idle>-0     [000] 27907.377891: schedule <-cpu_idle
          <idle>-0     [000] 27907.377891: __schedule <-schedule
          <idle>-0     [000] 27907.377891: rcu_qsctr_inc <-__schedule
          <idle>-0     [000] 27907.377891: _spin_lock_irq <-__schedule
          <idle>-0     [000] 27907.377892: put_prev_task_idle <-__schedule
          <idle>-0     [000] 27907.377892: pick_next_task_fair <-__schedule
          <idle>-0     [000] 27907.377892: __dequeue_entity <-pick_next_task_fair


Patch

diff --git a/drivers/net/bnx2.c b/drivers/net/bnx2.c
index d478391..1078c85 100644
--- a/drivers/net/bnx2.c
+++ b/drivers/net/bnx2.c
@@ -6168,7 +6168,13 @@  bnx2_start_xmit(struct sk_buff *skb, struct net_device *dev)
 
 	tx_buf = &txr->tx_buf_ring[ring_prod];
 	tx_buf->skb = skb;
-
+	dst_release(skb->dst);
+	skb->dst = NULL;
+	if (skb->destructor == sock_wfree) {
+		sock_wfree(skb);
+		skb->destructor = NULL;
+	}
+
 	txbd = &txr->tx_desc_ring[ring_prod];
 
 	txbd->tx_bd_haddr_hi = (u64) mapping >> 32;