diff mbox series

[v2,bpf-next,1/2] xdp: Add tracepoint for bulk XDP_TX

Message ID: 20190605053613.22888-2-toshiaki.makita1@gmail.com
State: Changes Requested
Delegated to: BPF Maintainers
Series: veth: Bulk XDP_TX

Commit Message

Toshiaki Makita June 5, 2019, 5:36 a.m. UTC
This is introduced for admins to check what is happening on XDP_TX when
bulk XDP_TX is in use, which will be first introduced in veth in next
commit.

Signed-off-by: Toshiaki Makita <toshiaki.makita1@gmail.com>
---
 include/trace/events/xdp.h | 25 +++++++++++++++++++++++++
 kernel/bpf/core.c          |  1 +
 2 files changed, 26 insertions(+)
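
As a rough sketch of how a driver's bulk-flush path might invoke the new
tracepoint (the real wiring for veth arrives in the next patch; the
function and struct names below are illustrative assumptions, not taken
from that patch):

	/* Illustrative only: flush a per-NAPI bulk queue with
	 * ndo_xdp_xmit()-style semantics, then record the outcome.
	 */
	static void example_xdp_flush_bq(struct net_device *dev,
					 struct example_xdp_tx_bq *bq)
	{
		int sent, i, err = 0;

		sent = example_xdp_xmit(dev, bq->count, bq->q, 0);
		if (sent < 0) {
			err = sent;
			sent = 0;
			for (i = 0; i < bq->count; i++)
				xdp_return_frame(bq->q[i]);
		}

		trace_xdp_bulk_tx(dev, sent, bq->count - sent, err);

		bq->count = 0;
	}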

Comments

Jesper Dangaard Brouer June 5, 2019, 7:59 a.m. UTC | #1
On Wed,  5 Jun 2019 14:36:12 +0900
Toshiaki Makita <toshiaki.makita1@gmail.com> wrote:

> This is introduced for admins to check what is happening on XDP_TX when
> bulk XDP_TX is in use, which will be first introduced in veth in next
> commit.

Is the plan that this tracepoint 'xdp:xdp_bulk_tx' should be used by
all drivers?

(more below)

> Signed-off-by: Toshiaki Makita <toshiaki.makita1@gmail.com>
> ---
>  include/trace/events/xdp.h | 25 +++++++++++++++++++++++++
>  kernel/bpf/core.c          |  1 +
>  2 files changed, 26 insertions(+)
> 
> diff --git a/include/trace/events/xdp.h b/include/trace/events/xdp.h
> index e95cb86..e06ea65 100644
> --- a/include/trace/events/xdp.h
> +++ b/include/trace/events/xdp.h
> @@ -50,6 +50,31 @@
>  		  __entry->ifindex)
>  );
>  
> +TRACE_EVENT(xdp_bulk_tx,
> +
> +	TP_PROTO(const struct net_device *dev,
> +		 int sent, int drops, int err),
> +
> +	TP_ARGS(dev, sent, drops, err),
> +
> +	TP_STRUCT__entry(

All other tracepoints in this file start with:

		__field(int, prog_id)
		__field(u32, act)
or
		__field(int, map_id)
		__field(u32, act)

Could you please add those?

> +		__field(int, ifindex)
> +		__field(int, drops)
> +		__field(int, sent)
> +		__field(int, err)
> +	),

The reason is that this makes it easier to attach to multiple
tracepoints and extract the same value.

Example with bpftrace oneliner:

$ sudo bpftrace -e 'tracepoint:xdp:xdp_* { @action[args->act] = count(); }'
Attaching 8 probes...
^C

@action[4]: 30259246
@action[0]: 34489024

XDP_ABORTED  = 0
XDP_REDIRECT = 4
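
(For reference, these action codes come from enum xdp_action in
include/uapi/linux/bpf.h:)

	enum xdp_action {
		XDP_ABORTED = 0,
		XDP_DROP,
		XDP_PASS,
		XDP_TX,
		XDP_REDIRECT,
	};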


> +
> +	TP_fast_assign(

		__entry->act		= XDP_TX;

> +		__entry->ifindex	= dev->ifindex;
> +		__entry->drops		= drops;
> +		__entry->sent		= sent;
> +		__entry->err		= err;
> +	),
> +
> +	TP_printk("ifindex=%d sent=%d drops=%d err=%d",
> +		  __entry->ifindex, __entry->sent, __entry->drops, __entry->err)
> +);
> +

Other fun bpftrace stuff:

sudo bpftrace -e 'tracepoint:xdp:xdp_*map* { @map_id[comm, args->map_id] = count(); }'
Attaching 5 probes...
^C

@map_id[swapper/2, 113]: 1428
@map_id[swapper/0, 113]: 2085
@map_id[ksoftirqd/4, 113]: 2253491
@map_id[ksoftirqd/2, 113]: 25677560
@map_id[ksoftirqd/0, 113]: 29004338
@map_id[ksoftirqd/3, 113]: 31034885


$ bpftool map list id 113
113: devmap  name tx_port  flags 0x0
	key 4B  value 4B  max_entries 100  memlock 4096B


p.s. People should look out for Brendan Gregg's upcoming book on BPF
performance tools, from which I learned to use bpftrace :-)
Toshiaki Makita June 6, 2019, 11:04 a.m. UTC | #2
On 2019/06/05 16:59, Jesper Dangaard Brouer wrote:
> On Wed,  5 Jun 2019 14:36:12 +0900
> Toshiaki Makita <toshiaki.makita1@gmail.com> wrote:
> 
>> This is introduced for admins to check what is happening on XDP_TX when
>> bulk XDP_TX is in use, which will be first introduced in veth in next
>> commit.
> 
> Is the plan that this tracepoint 'xdp:xdp_bulk_tx' should be used by
> all drivers?

I guess you mean all drivers that implement a similar mechanism should use
this? Then yes.
(I don't think all drivers need a bulk TX mechanism, though.)

> (more below)
> 
>> Signed-off-by: Toshiaki Makita <toshiaki.makita1@gmail.com>
>> ---
>>   include/trace/events/xdp.h | 25 +++++++++++++++++++++++++
>>   kernel/bpf/core.c          |  1 +
>>   2 files changed, 26 insertions(+)
>>
>> diff --git a/include/trace/events/xdp.h b/include/trace/events/xdp.h
>> index e95cb86..e06ea65 100644
>> --- a/include/trace/events/xdp.h
>> +++ b/include/trace/events/xdp.h
>> @@ -50,6 +50,31 @@
>>   		  __entry->ifindex)
>>   );
>>   
>> +TRACE_EVENT(xdp_bulk_tx,
>> +
>> +	TP_PROTO(const struct net_device *dev,
>> +		 int sent, int drops, int err),
>> +
>> +	TP_ARGS(dev, sent, drops, err),
>> +
>> +	TP_STRUCT__entry(
> 
> All other tracepoints in this file starts with:
> 
> 		__field(int, prog_id)
> 		__field(u32, act)
> or
> 		__field(int, map_id)
> 		__field(u32, act)
> 
> Could you please add those?

So... prog_id is the problem. The program can be changed while we are 
enqueueing packets to the bulk queue, so the prog_id at flush may be an 
unexpected one.
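
To illustrate the race in question (a sketch of the assumed veth
bulk-queue flow, not code from the patch):

	/*
	 *   NAPI poll (CPU 0)                        config path (CPU 1)
	 *   -----------------                        -------------------
	 *   prog = rcu_dereference(priv->xdp_prog);
	 *   act = bpf_prog_run_xdp(prog, &xdp);
	 *   // XDP_TX: frame goes into the bulk queue
	 *                                             old prog detached,
	 *                                             new prog attached
	 *   flush of the bulk queue
	 *   trace_xdp_bulk_tx(...);
	 *   // which prog_id should be reported here?
	 */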

It can be fixed by disabling NAPI when changing XDP programs. This stops 
packet processing while changing XDP programs, but I guess it is an 
acceptable compromise. Having said that, I'm honestly not so eager to 
make this change, since this will require reworking one of the most
delicate parts of veth XDP, the NAPI disabling/enabling mechanism.

WDYT?

>> +		__field(int, ifindex)
>> +		__field(int, drops)
>> +		__field(int, sent)
>> +		__field(int, err)
>> +	),
> 
> The reason is that this make is easier to attach to multiple
> tracepoints, and extract the same value.
> 
> Example with bpftrace oneliner:
> 
> $ sudo bpftrace -e 'tracepoint:xdp:xdp_* { @action[args->act] = count(); }'
> Attaching 8 probes...
> ^C
> 
> @action[4]: 30259246
> @action[0]: 34489024
> 
> XDP_ABORTED = 0 	
> XDP_REDIRECT= 4
> 
> 
>> +
>> +	TP_fast_assign(
> 
> 		__entry->act		= XDP_TX;

OK

> 
>> +		__entry->ifindex	= dev->ifindex;
>> +		__entry->drops		= drops;
>> +		__entry->sent		= sent;
>> +		__entry->err		= err;
>> +	),
>> +
>> +	TP_printk("ifindex=%d sent=%d drops=%d err=%d",
>> +		  __entry->ifindex, __entry->sent, __entry->drops, __entry->err)
>> +);
>> +
> 
> Other fun bpftrace stuff:
> 
> sudo bpftrace -e 'tracepoint:xdp:xdp_*map* { @map_id[comm, args->map_id] = count(); }'
> Attaching 5 probes...
> ^C
> 
> @map_id[swapper/2, 113]: 1428
> @map_id[swapper/0, 113]: 2085
> @map_id[ksoftirqd/4, 113]: 2253491
> @map_id[ksoftirqd/2, 113]: 25677560
> @map_id[ksoftirqd/0, 113]: 29004338
> @map_id[ksoftirqd/3, 113]: 31034885
> 
> 
> $ bpftool map list id 113
> 113: devmap  name tx_port  flags 0x0
> 	key 4B  value 4B  max_entries 100  memlock 4096B
> 
> 
> p.s. People should look out for Brendan Gregg's upcoming book on BPF
> performance tools, from which I learned to use bpftrace :-)

Where can I get information on the book?

--
Toshiaki Makita
Jesper Dangaard Brouer June 6, 2019, 7:41 p.m. UTC | #3
On Thu, 6 Jun 2019 20:04:20 +0900
Toshiaki Makita <toshiaki.makita1@gmail.com> wrote:

> On 2019/06/05 16:59, Jesper Dangaard Brouer wrote:
> > On Wed,  5 Jun 2019 14:36:12 +0900
> > Toshiaki Makita <toshiaki.makita1@gmail.com> wrote:
> >   
> >> This is introduced for admins to check what is happening on XDP_TX when
> >> bulk XDP_TX is in use, which will be first introduced in veth in next
> >> commit.  
> > 
> > Is the plan that this tracepoint 'xdp:xdp_bulk_tx' should be used by
> > all drivers?  
> 
> I guess you mean all drivers that implement similar mechanism should use 
> this? Then yes.
> (I don't think all drivers needs bulk tx mechanism though)
> 
> > (more below)
> >   
> >> Signed-off-by: Toshiaki Makita <toshiaki.makita1@gmail.com>
> >> ---
> >>   include/trace/events/xdp.h | 25 +++++++++++++++++++++++++
> >>   kernel/bpf/core.c          |  1 +
> >>   2 files changed, 26 insertions(+)
> >>
> >> diff --git a/include/trace/events/xdp.h b/include/trace/events/xdp.h
> >> index e95cb86..e06ea65 100644
> >> --- a/include/trace/events/xdp.h
> >> +++ b/include/trace/events/xdp.h
> >> @@ -50,6 +50,31 @@
> >>   		  __entry->ifindex)
> >>   );
> >>   
> >> +TRACE_EVENT(xdp_bulk_tx,
> >> +
> >> +	TP_PROTO(const struct net_device *dev,
> >> +		 int sent, int drops, int err),
> >> +
> >> +	TP_ARGS(dev, sent, drops, err),
> >> +
> >> +	TP_STRUCT__entry(  
> > 
> > All other tracepoints in this file starts with:
> > 
> > 		__field(int, prog_id)
> > 		__field(u32, act)
> > or
> > 		__field(int, map_id)
> > 		__field(u32, act)
> > 
> > Could you please add those?  
> 
> So... prog_id is the problem. The program can be changed while we are 
> enqueueing packets to the bulk queue, so the prog_id at flush may be an 
> unexpected one.

Hmmm... that sounds problematic, if the XDP bpf_prog for veth can
change underneath, before the flush.  Our redirect system depends on
things being stable until the xdp_do_flush_map() operation, as it will
e.g. set the per-CPU (bpf_redirect_info) map_to_flush pointer (which
depends on the XDP prog), and expects it to be correct/valid.


> It can be fixed by disabling NAPI when changing XDP programs. This stops 
> packet processing while changing XDP programs, but I guess it is an 
> acceptable compromise. Having said that, I'm honestly not so eager to 
> make this change, since this will require refurbishment of one of the 
> most delicate part of veth XDP, NAPI disabling/enabling mechanism.
> 
> WDYT?

Sounds like a bug, if the XDP bpf_prog is not stable within the NAPI poll...

 
> >> +		__field(int, ifindex)
> >> +		__field(int, drops)
> >> +		__field(int, sent)
> >> +		__field(int, err)
> >> +	),  
> > 
> > The reason is that this make is easier to attach to multiple
> > tracepoints, and extract the same value.
> > 
> > Example with bpftrace oneliner:
> > 
> > $ sudo bpftrace -e 'tracepoint:xdp:xdp_* { @action[args->act] = count(); }'
Toshiaki Makita June 7, 2019, 2:22 a.m. UTC | #4
On 2019/06/07 4:41, Jesper Dangaard Brouer wrote:
> On Thu, 6 Jun 2019 20:04:20 +0900
> Toshiaki Makita <toshiaki.makita1@gmail.com> wrote:
> 
>> On 2019/06/05 16:59, Jesper Dangaard Brouer wrote:
>>> On Wed,  5 Jun 2019 14:36:12 +0900
>>> Toshiaki Makita <toshiaki.makita1@gmail.com> wrote:
>>>    
>>>> This is introduced for admins to check what is happening on XDP_TX when
>>>> bulk XDP_TX is in use, which will be first introduced in veth in next
>>>> commit.
>>>
>>> Is the plan that this tracepoint 'xdp:xdp_bulk_tx' should be used by
>>> all drivers?
>>
>> I guess you mean all drivers that implement similar mechanism should use
>> this? Then yes.
>> (I don't think all drivers needs bulk tx mechanism though)
>>
>>> (more below)
>>>    
>>>> Signed-off-by: Toshiaki Makita <toshiaki.makita1@gmail.com>
>>>> ---
>>>>    include/trace/events/xdp.h | 25 +++++++++++++++++++++++++
>>>>    kernel/bpf/core.c          |  1 +
>>>>    2 files changed, 26 insertions(+)
>>>>
>>>> diff --git a/include/trace/events/xdp.h b/include/trace/events/xdp.h
>>>> index e95cb86..e06ea65 100644
>>>> --- a/include/trace/events/xdp.h
>>>> +++ b/include/trace/events/xdp.h
>>>> @@ -50,6 +50,31 @@
>>>>    		  __entry->ifindex)
>>>>    );
>>>>    
>>>> +TRACE_EVENT(xdp_bulk_tx,
>>>> +
>>>> +	TP_PROTO(const struct net_device *dev,
>>>> +		 int sent, int drops, int err),
>>>> +
>>>> +	TP_ARGS(dev, sent, drops, err),
>>>> +
>>>> +	TP_STRUCT__entry(
>>>
>>> All other tracepoints in this file starts with:
>>>
>>> 		__field(int, prog_id)
>>> 		__field(u32, act)
>>> or
>>> 		__field(int, map_id)
>>> 		__field(u32, act)
>>>
>>> Could you please add those?
>>
>> So... prog_id is the problem. The program can be changed while we are
>> enqueueing packets to the bulk queue, so the prog_id at flush may be an
>> unexpected one.
> 
> Hmmm... that sounds problematic, if the XDP bpf_prog for veth can
> change underneath, before the flush.  Our redirect system, depend on
> things being stable until the xdp_do_flush_map() operation, as will
> e.g. set per-CPU (bpf_redirect_info) map_to_flush pointer (which depend
> on XDP prog), and expect it to be correct/valid.

Sorry, I don't get how maps depend on programs.
At least xdp_do_redirect_map() handles map_to_flush changes during NAPI. 
Is there a problem when the map is not changed but the program is changed?
Also I believe this is not veth-specific behavior. Looking at tun and 
i40e, they seem to change xdp_prog without stopping the data path.

>> It can be fixed by disabling NAPI when changing XDP programs. This stops
>> packet processing while changing XDP programs, but I guess it is an
>> acceptable compromise. Having said that, I'm honestly not so eager to
>> make this change, since this will require refurbishment of one of the
>> most delicate part of veth XDP, NAPI disabling/enabling mechanism.
>>
>> WDYT?
> 
> Sound like a bug, if XDP bpf_prog is not stable within the NAPI poll...
> 
>   
>>>> +		__field(int, ifindex)
>>>> +		__field(int, drops)
>>>> +		__field(int, sent)
>>>> +		__field(int, err)
>>>> +	),
>>>
>>> The reason is that this make is easier to attach to multiple
>>> tracepoints, and extract the same value.
>>>
>>> Example with bpftrace oneliner:
>>>
>>> $ sudo bpftrace -e 'tracepoint:xdp:xdp_* { @action[args->act] = count(); }'
>
Jesper Dangaard Brouer June 7, 2019, 9:32 a.m. UTC | #5
On Fri, 7 Jun 2019 11:22:00 +0900
Toshiaki Makita <toshiaki.makita1@gmail.com> wrote:

> On 2019/06/07 4:41, Jesper Dangaard Brouer wrote:
> > On Thu, 6 Jun 2019 20:04:20 +0900
> > Toshiaki Makita <toshiaki.makita1@gmail.com> wrote:
> >   
> >> On 2019/06/05 16:59, Jesper Dangaard Brouer wrote:  
> >>> On Wed,  5 Jun 2019 14:36:12 +0900
> >>> Toshiaki Makita <toshiaki.makita1@gmail.com> wrote:
> >>>      
[...]
> >>
> >> So... prog_id is the problem. The program can be changed while we are
> >> enqueueing packets to the bulk queue, so the prog_id at flush may be an
> >> unexpected one.  
> > 
> > Hmmm... that sounds problematic, if the XDP bpf_prog for veth can
> > change underneath, before the flush.  Our redirect system, depend on
> > things being stable until the xdp_do_flush_map() operation, as will
> > e.g. set per-CPU (bpf_redirect_info) map_to_flush pointer (which depend
> > on XDP prog), and expect it to be correct/valid.  
> 
> Sorry, I don't get how maps depend on programs.

BPF/XDP programs hold a reference count on the map (e.g. the one used
for redirect), and when the XDP program is removed and the last refcnt
for the map is dropped, the map is also removed (redirect maps do a
call_rcu on shutdown).

> At least xdp_do_redirect_map() handles map_to_flush change during NAPI. 
> Is there a problem when the map is not changed but the program is changed?
> Also I believe this is not veth-specific behavior. Looking at tun and 
> i40e, they seem to change xdp_prog without stopping data path.
 
I guess this could actually happen, but we are "saved" by the fact that
the 'map_to_flush' pointer is still valid due to RCU protection.

But it does look fishy, as our rcu_read_lock does not encapsulate this.
There is an RCU read-side section in veth_xdp_rcv_skb(), which can call
xdp_do_redirect(), which sets the per-CPU ri->map_to_flush.

Do we get this protection by running under softirq, and does that prevent
an RCU grace period (call_rcu callbacks) from happening between
veth_xdp_rcv_skb() and xdp_do_flush_map() in veth_poll()?
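
A rough outline of the path in question (simplified, not the exact veth
code):

	/*
	 *   veth_poll()
	 *     veth_xdp_rcv()                      // per received frame:
	 *       rcu_read_lock()
	 *       bpf_prog_run_xdp(prog, &xdp)
	 *       xdp_do_redirect(dev, &xdp, prog)  // sets per-CPU ri->map_to_flush
	 *       rcu_read_unlock()
	 *     ...
	 *     xdp_do_flush_map()                  // runs outside the per-frame
	 *                                         // RCU read-side sections above
	 */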


To Toshiaki, regarding your patch 2/2: you are not affected by this
per-CPU map storing, as you pass along the bulk queue.  I do see your
point that the prog_id could change.  Could you change the tracepoint to
include 'act' and place 'ifindex' above it in the struct?  This way the
'act' member is at the same location/offset as in the other XDP
tracepoints.  I see 'ifindex' as the identifier for this tracepoint
(others have map_id or prog_id in this location).
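
One possible shape for the revised tracepoint along these lines (a
sketch only, assuming the file's existing __XDP_ACT_SYM_TAB helper for
printing the action symbolically; not the posted patch):

	TRACE_EVENT(xdp_bulk_tx,

		TP_PROTO(const struct net_device *dev,
			 int sent, int drops, int err),

		TP_ARGS(dev, sent, drops, err),

		TP_STRUCT__entry(
			__field(int, ifindex)
			__field(u32, act)
			__field(int, drops)
			__field(int, sent)
			__field(int, err)
		),

		TP_fast_assign(
			__entry->ifindex	= dev->ifindex;
			__entry->act		= XDP_TX;
			__entry->drops		= drops;
			__entry->sent		= sent;
			__entry->err		= err;
		),

		TP_printk("ifindex=%d action=%s sent=%d drops=%d err=%d",
			  __entry->ifindex,
			  __print_symbolic(__entry->act, __XDP_ACT_SYM_TAB),
			  __entry->sent, __entry->drops, __entry->err)
	);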
Toshiaki Makita June 10, 2019, 11:41 a.m. UTC | #6
On 2019/06/07 18:32, Jesper Dangaard Brouer wrote:
> On Fri, 7 Jun 2019 11:22:00 +0900
> Toshiaki Makita <toshiaki.makita1@gmail.com> wrote:
> 
>> On 2019/06/07 4:41, Jesper Dangaard Brouer wrote:
>>> On Thu, 6 Jun 2019 20:04:20 +0900
>>> Toshiaki Makita <toshiaki.makita1@gmail.com> wrote:
>>>    
>>>> On 2019/06/05 16:59, Jesper Dangaard Brouer wrote:
>>>>> On Wed,  5 Jun 2019 14:36:12 +0900
>>>>> Toshiaki Makita <toshiaki.makita1@gmail.com> wrote:
>>>>>       
> [...]
>>>>
>>>> So... prog_id is the problem. The program can be changed while we are
>>>> enqueueing packets to the bulk queue, so the prog_id at flush may be an
>>>> unexpected one.
>>>
>>> Hmmm... that sounds problematic, if the XDP bpf_prog for veth can
>>> change underneath, before the flush.  Our redirect system, depend on
>>> things being stable until the xdp_do_flush_map() operation, as will
>>> e.g. set per-CPU (bpf_redirect_info) map_to_flush pointer (which depend
>>> on XDP prog), and expect it to be correct/valid.
>>
>> Sorry, I don't get how maps depend on programs.
> 
> BPF/XDP programs have a reference count on the map (e.g. used for
> redirect) and when the XDP is removed, and last refcnt for the map is
> reached, then the map is also removed (redirect maps does a call_rcu
> when shutdown).

Thanks, now I understand what you mean.

>> At least xdp_do_redirect_map() handles map_to_flush change during NAPI.
>> Is there a problem when the map is not changed but the program is changed?
>> Also I believe this is not veth-specific behavior. Looking at tun and
>> i40e, they seem to change xdp_prog without stopping data path.
>   
> I guess this could actually happen, but we are "saved" by the
> 'map_to_flush' (pointer) is still valid due to RCU protection.
> 
> But it does look fishy, as our rcu_read_lock's does not encapsulation
> this. There is RCU-read-section in veth_xdp_rcv_skb(), which via can
> call xdp_do_redirect() which set per-CPU ri->map_to_flush.
> 
> Do we get this protection by running under softirq, and does this
> prevent an RCU grace-period (call_rcu callbacks) from happening?
> (between veth_xdp_rcv_skb() and xdp_do_flush_map() in veth_poll())

Are we trying to avoid that problem in dev_map_free()?

	/* To ensure all pending flush operations have completed wait for flush
	 * bitmap to indicate all flush_needed bits to be zero on _all_ cpus.
	 * Because the above synchronize_rcu() ensures the map is disconnected
	 * from the program we can assume no new bits will be set.
	 */
	for_each_online_cpu(cpu) {
		unsigned long *bitmap = per_cpu_ptr(dtab->flush_needed, cpu);

		while (!bitmap_empty(bitmap, dtab->map.max_entries))
			cond_resched();
	}

Not sure if this is working as expected.

> 
> 
> To Toshiaki, regarding your patch 2/2, you are not affected by this
> per-CPU map storing, as you pass along the bulk-queue.  I do see you
> point, with prog_id could change.  Could you change the tracepoint to
> include the 'act' and place 'ifindex' above this in the struct, this way
> the 'act' member is in the same location/offset as other XDP
> tracepoints.  I see the 'ifindex' as the identifier for this tracepoint
> (other have map_id or prog_id in this location).

Sure, thanks.

Toshiaki Makita
diff mbox series

Patch

diff --git a/include/trace/events/xdp.h b/include/trace/events/xdp.h
index e95cb86..e06ea65 100644
--- a/include/trace/events/xdp.h
+++ b/include/trace/events/xdp.h
@@ -50,6 +50,31 @@ 
 		  __entry->ifindex)
 );
 
+TRACE_EVENT(xdp_bulk_tx,
+
+	TP_PROTO(const struct net_device *dev,
+		 int sent, int drops, int err),
+
+	TP_ARGS(dev, sent, drops, err),
+
+	TP_STRUCT__entry(
+		__field(int, ifindex)
+		__field(int, drops)
+		__field(int, sent)
+		__field(int, err)
+	),
+
+	TP_fast_assign(
+		__entry->ifindex	= dev->ifindex;
+		__entry->drops		= drops;
+		__entry->sent		= sent;
+		__entry->err		= err;
+	),
+
+	TP_printk("ifindex=%d sent=%d drops=%d err=%d",
+		  __entry->ifindex, __entry->sent, __entry->drops, __entry->err)
+);
+
 DECLARE_EVENT_CLASS(xdp_redirect_template,
 
 	TP_PROTO(const struct net_device *dev,
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 33fb292..3a3f4af 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2106,3 +2106,4 @@  int __weak skb_copy_bits(const struct sk_buff *skb, int offset, void *to,
 #include <linux/bpf_trace.h>
 
 EXPORT_TRACEPOINT_SYMBOL_GPL(xdp_exception);
+EXPORT_TRACEPOINT_SYMBOL_GPL(xdp_bulk_tx);