[bpf-next,0/8] Simplify xdp_do_redirect_map()/xdp_do_flush_map() and XDP maps

Message ID 20191218105400.2895-1-bjorn.topel@gmail.com

Message

Björn Töpel Dec. 18, 2019, 10:53 a.m. UTC
This series aims to simplify the XDP maps and
xdp_do_redirect_map()/xdp_do_flush_map(), and to crank out some more
performance from XDP_REDIRECT scenarios.

The first part of the series simplifies all XDP_REDIRECT capable maps,
so that __XXX_flush_map() does not require the map parameter, by
moving the flush list from the map to global scope.

As a result, the map_to_flush member, and its corresponding logic, can
be removed from struct bpf_redirect_info.

Simpler code, and more performance, since per-packet checks/work are
moved to flush time.
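
To illustrate the direction (a minimal sketch only, with devmap as the
example; the real patches differ in naming and details, and cpumap and
xskmap follow the same pattern):

  /* Sketch: one per-CPU flush list per map type, instead of tracking
   * the last used map in bpf_redirect_info::map_to_flush.  The per-CPU
   * list heads and each bq->flush_node are INIT_LIST_HEAD()'d at init
   * time (not shown).
   */
  static DEFINE_PER_CPU(struct list_head, dev_flush_list);

  /* Enqueue path: put the per-device bulk queue on this CPU's flush
   * list the first time it gets packets.
   */
  static void bq_enqueue_sketch(struct xdp_bulk_queue *bq)
  {
          struct list_head *flush_list = this_cpu_ptr(&dev_flush_list);

          if (list_empty(&bq->flush_node))
                  list_add(&bq->flush_node, flush_list);
  }

  /* xdp_do_flush_map() ends up here; note that no struct bpf_map
   * argument is needed anymore.
   */
  void __dev_map_flush_sketch(void)
  {
          struct list_head *flush_list = this_cpu_ptr(&dev_flush_list);
          struct xdp_bulk_queue *bq, *tmp;

          list_for_each_entry_safe(bq, tmp, flush_list, flush_node) {
                  list_del_init(&bq->flush_node);
                  bq_xmit_all(bq, XDP_XMIT_FLUSH); /* devmap bulk xmit */
          }
  }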

Pre-series performance:
  $ sudo taskset -c 22 ./xdpsock -i enp134s0f0 -q 20 -n 1 -r -z
  
   sock0@enp134s0f0:20 rxdrop xdp-drv 
                  pps         pkts        1.00       
  rx              20,797,350  230,942,399
  tx              0           0          
  
  $ sudo ./xdp_redirect_cpu --dev enp134s0f0 --cpu 22 xdp_cpu_map0
  
  Running XDP/eBPF prog_name:xdp_cpu_map5_lb_hash_ip_pairs
  XDP-cpumap      CPU:to  pps            drop-pps    extra-info
  XDP-RX          20      7723038        0           0
  XDP-RX          total   7723038        0
  cpumap_kthread  total   0              0           0
  redirect_err    total   0              0
  xdp_exception   total   0              0

Post-series performance:
  $ sudo taskset -c 22 ./xdpsock -i enp134s0f0 -q 20 -n 1 -r -z

   sock0@enp134s0f0:20 rxdrop xdp-drv 
                  pps         pkts        1.00       
  rx              21,524,979  86,835,327 
  tx              0           0          
  
  $ sudo ./xdp_redirect_cpu --dev enp134s0f0 --cpu 22 xdp_cpu_map0

  Running XDP/eBPF prog_name:xdp_cpu_map5_lb_hash_ip_pairs
  XDP-cpumap      CPU:to  pps            drop-pps    extra-info
  XDP-RX          20      7840124        0           0          
  XDP-RX          total   7840124        0          
  cpumap_kthread  total   0              0           0          
  redirect_err    total   0              0          
  xdp_exception   total   0              0          
  
Results: +3.5% and +1.5% for the ubenchmarks.

Björn Töpel (8):
  xdp: simplify devmap cleanup
  xdp: simplify cpumap cleanup
  xdp: fix graze->grace type-o in cpumap comments
  xsk: make xskmap flush_list common for all map instances
  xdp: make devmap flush_list common for all map instances
  xdp: make cpumap flush_list common for all map instances
  xdp: remove map_to_flush and map swap detection
  xdp: simplify __bpf_tx_xdp_map()

 include/linux/bpf.h    |  8 ++---
 include/linux/filter.h |  1 -
 include/net/xdp_sock.h | 11 +++---
 kernel/bpf/cpumap.c    | 76 ++++++++++++++--------------------------
 kernel/bpf/devmap.c    | 78 ++++++++++--------------------------------
 kernel/bpf/xskmap.c    | 16 ++-------
 net/core/filter.c      | 63 ++++++----------------------------
 net/xdp/xsk.c          | 17 ++++-----
 8 files changed, 74 insertions(+), 196 deletions(-)

Comments

Jesper Dangaard Brouer Dec. 18, 2019, 11:11 a.m. UTC | #1
On Wed, 18 Dec 2019 11:53:52 +0100
Björn Töpel <bjorn.topel@gmail.com> wrote:

>   $ sudo ./xdp_redirect_cpu --dev enp134s0f0 --cpu 22 xdp_cpu_map0
>   
>   Running XDP/eBPF prog_name:xdp_cpu_map5_lb_hash_ip_pairs
>   XDP-cpumap      CPU:to  pps            drop-pps    extra-info
>   XDP-RX          20      7723038        0           0
>   XDP-RX          total   7723038        0
>   cpumap_kthread  total   0              0           0
>   redirect_err    total   0              0
>   xdp_exception   total   0              0

Hmm... I'm missing some counters on the kthread side.
Björn Töpel Dec. 18, 2019, 11:39 a.m. UTC | #2
On Wed, 18 Dec 2019 at 12:11, Jesper Dangaard Brouer <brouer@redhat.com> wrote:
>
> On Wed, 18 Dec 2019 11:53:52 +0100
> Björn Töpel <bjorn.topel@gmail.com> wrote:
>
> >   $ sudo ./xdp_redirect_cpu --dev enp134s0f0 --cpu 22 xdp_cpu_map0
> >
> >   Running XDP/eBPF prog_name:xdp_cpu_map5_lb_hash_ip_pairs
> >   XDP-cpumap      CPU:to  pps            drop-pps    extra-info
> >   XDP-RX          20      7723038        0           0
> >   XDP-RX          total   7723038        0
> >   cpumap_kthread  total   0              0           0
> >   redirect_err    total   0              0
> >   xdp_exception   total   0              0
>
> Hmm... I'm missing some counters on the kthread side.
>

Oh? Any ideas why? I just ran the upstream sample straight off.
Jesper Dangaard Brouer Dec. 18, 2019, 12:03 p.m. UTC | #3
On Wed, 18 Dec 2019 12:39:53 +0100
Björn Töpel <bjorn.topel@gmail.com> wrote:

> On Wed, 18 Dec 2019 at 12:11, Jesper Dangaard Brouer <brouer@redhat.com> wrote:
> >
> > On Wed, 18 Dec 2019 11:53:52 +0100
> > Björn Töpel <bjorn.topel@gmail.com> wrote:
> >  
> > >   $ sudo ./xdp_redirect_cpu --dev enp134s0f0 --cpu 22 xdp_cpu_map0
> > >
> > >   Running XDP/eBPF prog_name:xdp_cpu_map5_lb_hash_ip_pairs
> > >   XDP-cpumap      CPU:to  pps            drop-pps    extra-info
> > >   XDP-RX          20      7723038        0           0
> > >   XDP-RX          total   7723038        0
> > >   cpumap_kthread  total   0              0           0
> > >   redirect_err    total   0              0
> > >   xdp_exception   total   0              0  
> >
> > Hmm... I'm missing some counters on the kthread side.
> >  
> 
> Oh? Any ideas why? I just ran the upstream sample straight off.

Looks like it happened in commit: bbaf6029c49c ("samples/bpf: Convert
XDP samples to libbpf usage") (Cc Maciej).

The old bpf_load.c will auto-attach the tracepoints... with libbpf
you have to be explicit about it.
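
With libbpf the attach step looks something like this (illustrative
sketch only, not the sample's actual code; the section/tracepoint
names are just examples):

  #include <bpf/libbpf.h>

  /* Explicitly attach one of the sample's tracepoint programs after
   * the BPF object has been loaded.
   */
  static int attach_cpumap_kthread_tp(struct bpf_object *obj)
  {
          struct bpf_program *prog;
          struct bpf_link *link;

          /* Matches a program defined with e.g.
           * SEC("tracepoint/xdp/xdp_cpumap_kthread") on the kernel side.
           */
          prog = bpf_object__find_program_by_title(obj,
                          "tracepoint/xdp/xdp_cpumap_kthread");
          if (!prog)
                  return -1;

          link = bpf_program__attach_tracepoint(prog, "xdp",
                                                "xdp_cpumap_kthread");
          return libbpf_get_error(link) ? -1 : 0;
  }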

Can I ask you to also run a test with --stress-mode for
./xdp_redirect_cpu, to flush out any potential RCU race-conditions
(don't provide output, this is just a robustness test).
Björn Töpel Dec. 18, 2019, 12:18 p.m. UTC | #4
On Wed, 18 Dec 2019 at 13:04, Jesper Dangaard Brouer <brouer@redhat.com> wrote:
>
> On Wed, 18 Dec 2019 12:39:53 +0100
> Björn Töpel <bjorn.topel@gmail.com> wrote:
>
> > On Wed, 18 Dec 2019 at 12:11, Jesper Dangaard Brouer <brouer@redhat.com> wrote:
> > >
> > > On Wed, 18 Dec 2019 11:53:52 +0100
> > > Björn Töpel <bjorn.topel@gmail.com> wrote:
> > >
> > > >   $ sudo ./xdp_redirect_cpu --dev enp134s0f0 --cpu 22 xdp_cpu_map0
> > > >
> > > >   Running XDP/eBPF prog_name:xdp_cpu_map5_lb_hash_ip_pairs
> > > >   XDP-cpumap      CPU:to  pps            drop-pps    extra-info
> > > >   XDP-RX          20      7723038        0           0
> > > >   XDP-RX          total   7723038        0
> > > >   cpumap_kthread  total   0              0           0
> > > >   redirect_err    total   0              0
> > > >   xdp_exception   total   0              0
> > >
> > > Hmm... I'm missing some counters on the kthread side.
> > >
> >
> > Oh? Any ideas why? I just ran the upstream sample straight off.
>
> Looks like it happened in commit: bbaf6029c49c ("samples/bpf: Convert
> XDP samples to libbpf usage") (Cc Maciej).
>
> The old bpf_load.c will auto attach the tracepoints... for and libbpf
> you have to be explicit about it.
>
> Can I ask you to also run a test with --stress-mode for
> ./xdp_redirect_cpu, to flush out any potential RCU race-conditions
> (don't provide output, this is just a robustness test).
>

Sure! Other than that, does the command line above make sense? I'm
blasting UDP packets to core 20, and the idea was to re-route them to
22.


Björn
Björn Töpel Dec. 18, 2019, 12:32 p.m. UTC | #5
On Wed, 18 Dec 2019 at 13:18, Björn Töpel <bjorn.topel@gmail.com> wrote:
>
> On Wed, 18 Dec 2019 at 13:04, Jesper Dangaard Brouer <brouer@redhat.com> wrote:
> >
> > On Wed, 18 Dec 2019 12:39:53 +0100
> > Björn Töpel <bjorn.topel@gmail.com> wrote:
> >
> > > On Wed, 18 Dec 2019 at 12:11, Jesper Dangaard Brouer <brouer@redhat.com> wrote:
> > > >
> > > > On Wed, 18 Dec 2019 11:53:52 +0100
> > > > Björn Töpel <bjorn.topel@gmail.com> wrote:
> > > >
> > > > >   $ sudo ./xdp_redirect_cpu --dev enp134s0f0 --cpu 22 xdp_cpu_map0
> > > > >
> > > > >   Running XDP/eBPF prog_name:xdp_cpu_map5_lb_hash_ip_pairs
> > > > >   XDP-cpumap      CPU:to  pps            drop-pps    extra-info
> > > > >   XDP-RX          20      7723038        0           0
> > > > >   XDP-RX          total   7723038        0
> > > > >   cpumap_kthread  total   0              0           0
> > > > >   redirect_err    total   0              0
> > > > >   xdp_exception   total   0              0
> > > >
> > > > Hmm... I'm missing some counters on the kthread side.
> > > >
> > >
> > > Oh? Any ideas why? I just ran the upstream sample straight off.
> >
> > Looks like it happened in commit: bbaf6029c49c ("samples/bpf: Convert
> > XDP samples to libbpf usage") (Cc Maciej).
> >
> > The old bpf_load.c will auto attach the tracepoints... for and libbpf
> > you have to be explicit about it.
> >
> > Can I ask you to also run a test with --stress-mode for
> > ./xdp_redirect_cpu, to flush out any potential RCU race-conditions
> > (don't provide output, this is just a robustness test).
> >
>
> Sure! Other than that, does the command line above make sense? I'm
> blasting UDP packets to core 20, and the idea was to re-route them to
> 22.
>

No crash with --stress-mode/-x. (Still no tracepoint output.) And
bpf_redirect_map() is executed and the cpu_map thread is running. :-P
Jesper Dangaard Brouer Dec. 18, 2019, 12:40 p.m. UTC | #6
On Wed, 18 Dec 2019 13:18:10 +0100
Björn Töpel <bjorn.topel@gmail.com> wrote:

> On Wed, 18 Dec 2019 at 13:04, Jesper Dangaard Brouer <brouer@redhat.com> wrote:
> >
> > On Wed, 18 Dec 2019 12:39:53 +0100
> > Björn Töpel <bjorn.topel@gmail.com> wrote:
> >  
> > > On Wed, 18 Dec 2019 at 12:11, Jesper Dangaard Brouer <brouer@redhat.com> wrote:  
> > > >
> > > > On Wed, 18 Dec 2019 11:53:52 +0100
> > > > Björn Töpel <bjorn.topel@gmail.com> wrote:
> > > >  
> > > > >   $ sudo ./xdp_redirect_cpu --dev enp134s0f0 --cpu 22 xdp_cpu_map0
> > > > >
> > > > >   Running XDP/eBPF prog_name:xdp_cpu_map5_lb_hash_ip_pairs
> > > > >   XDP-cpumap      CPU:to  pps            drop-pps    extra-info
> > > > >   XDP-RX          20      7723038        0           0
> > > > >   XDP-RX          total   7723038        0
> > > > >   cpumap_kthread  total   0              0           0
> > > > >   redirect_err    total   0              0
> > > > >   xdp_exception   total   0              0  
> > > >
> > > > Hmm... I'm missing some counters on the kthread side.
> > > >  
> > >
> > > Oh? Any ideas why? I just ran the upstream sample straight off.  
> >
> > Looks like it happened in commit: bbaf6029c49c ("samples/bpf: Convert
> > XDP samples to libbpf usage") (Cc Maciej).
> >
> > The old bpf_load.c will auto attach the tracepoints... for and libbpf
> > you have to be explicit about it.
> >
> > Can I ask you to also run a test with --stress-mode for
> > ./xdp_redirect_cpu, to flush out any potential RCU race-conditions
> > (don't provide output, this is just a robustness test).
> >  
> 
> Sure! Other than that, does the command line above make sense? I'm
> blasting UDP packets to core 20, and the idea was to re-route them to
> 22.

Yes, and I love that you are using CPUMAP xdp_redirect_cpu as a test.

Explaining what is going on (so you can say if this is what you wanted
to test):

The "XDP-RX" number is the raw XDP redirect number, but the remote CPU,
where the network stack is started, cannot operate at 7.7Mpps.  Which the
lacking tracepoint numbers should have shown. You still can observe
results via nstat, e.g.:

 # nstat -n && sleep 1 && nstat

On the remote CPU 22, the SKB will be constructed, and likely dropped
due to overloading the network stack and to not having a UDP listen port.

I sometimes use:
 # iptables -t raw -I PREROUTING -p udp --dport 9 -j DROP
to drop the UDP packets at an earlier, consistent stage.

The CPUMAP has been carefully designed so that a "producer" cannot be
slowed down by memory operations done by the "consumer"; this is
mostly achieved via ptr_ring and careful bulking (cache-lines).  As
your driver i40e doesn't have 'page_pool', you are not affected by
the return channel.

Funny test/detail: i40e uses a refcnt recycle scheme, based on the
size of the RX-ring, so it is affected by a longer outstanding queue.
The CPUMAP has an intermediate queue, which will be full in this
overload setting.  Try increasing or decreasing the --qsize parameter
(remember to place it as the first argument), and see if this was the
limiting factor for your XDP-RX number.
Björn Töpel Dec. 18, 2019, 12:48 p.m. UTC | #7
On Wed, 18 Dec 2019 at 13:40, Jesper Dangaard Brouer <brouer@redhat.com> wrote:
>
> On Wed, 18 Dec 2019 13:18:10 +0100
> Björn Töpel <bjorn.topel@gmail.com> wrote:
>
> > On Wed, 18 Dec 2019 at 13:04, Jesper Dangaard Brouer <brouer@redhat.com> wrote:
> > >
> > > On Wed, 18 Dec 2019 12:39:53 +0100
> > > Björn Töpel <bjorn.topel@gmail.com> wrote:
> > >
> > > > On Wed, 18 Dec 2019 at 12:11, Jesper Dangaard Brouer <brouer@redhat.com> wrote:
> > > > >
> > > > > On Wed, 18 Dec 2019 11:53:52 +0100
> > > > > Björn Töpel <bjorn.topel@gmail.com> wrote:
> > > > >
> > > > > >   $ sudo ./xdp_redirect_cpu --dev enp134s0f0 --cpu 22 xdp_cpu_map0
> > > > > >
> > > > > >   Running XDP/eBPF prog_name:xdp_cpu_map5_lb_hash_ip_pairs
> > > > > >   XDP-cpumap      CPU:to  pps            drop-pps    extra-info
> > > > > >   XDP-RX          20      7723038        0           0
> > > > > >   XDP-RX          total   7723038        0
> > > > > >   cpumap_kthread  total   0              0           0
> > > > > >   redirect_err    total   0              0
> > > > > >   xdp_exception   total   0              0
> > > > >
> > > > > Hmm... I'm missing some counters on the kthread side.
> > > > >
> > > >
> > > > Oh? Any ideas why? I just ran the upstream sample straight off.
> > >
> > > Looks like it happened in commit: bbaf6029c49c ("samples/bpf: Convert
> > > XDP samples to libbpf usage") (Cc Maciej).
> > >
> > > The old bpf_load.c will auto attach the tracepoints... for and libbpf
> > > you have to be explicit about it.
> > >
> > > Can I ask you to also run a test with --stress-mode for
> > > ./xdp_redirect_cpu, to flush out any potential RCU race-conditions
> > > (don't provide output, this is just a robustness test).
> > >
> >
> > Sure! Other than that, does the command line above make sense? I'm
> > blasting UDP packets to core 20, and the idea was to re-route them to
> > 22.
>
> Yes, and I love that you are using CPUMAP xdp_redirect_cpu as a test.
>
> Explaining what is doing on (so you can say if this is what you wanted
> to test):
>

I wanted to see whether one could receive more (Rx + bpf_redirect_map)
with the change. I figured out that at least bpf_redirect_map() was
correctly executed, and that the numbers went up. :-P

> The "XDP-RX" number is the raw XDP redirect number, but the remote CPU,
> where the network stack is started, cannot operate at 7.7Mpps.  Which the
> lacking tracepoint numbers should have shown. You still can observe
> results via nstat, e.g.:
>
>  # nstat -n && sleep 1 && nstat
>
> On the remote CPU 22, the SKB will be constructed, and likely dropped
> due overloading network stack and due to not having an UDP listen port.
>
> I sometimes use:
>  # iptables -t raw -I PREROUTING -p udp --dport 9 -j DROP
> To drop the UDP packets in a earlier and consistent stage.
>
> The CPUMAP have carefully been designed to avoid that a "producer" can
> be slowed down by memory operations done by the "consumer", this is
> mostly achieved via ptr_ring and careful bulking (cache-lines).  As
> your driver i40e doesn't have 'page_pool', then you are not affected by
> the return channel.
>
> Funny test/details: i40e uses a refcnt recycle scheme, based off the
> size of the RX-ring, thus it is affected by a longer outstanding queue.
> The CPUMAP have an intermediate queue, that will be full in this
> overload setting.  Try to increase or decrease the parameter --qsize
> (remember to place it as first argument), and see if this was the
> limiting factor for your XDP-RX number.
>

Thanks for the elaborate description!

(Maybe it's time for samples/bpf manpages? ;-))


Björn
Andrii Nakryiko Dec. 19, 2019, 12:39 a.m. UTC | #8
On Wed, Dec 18, 2019 at 4:04 AM Jesper Dangaard Brouer
<brouer@redhat.com> wrote:
>
> On Wed, 18 Dec 2019 12:39:53 +0100
> Björn Töpel <bjorn.topel@gmail.com> wrote:
>
> > On Wed, 18 Dec 2019 at 12:11, Jesper Dangaard Brouer <brouer@redhat.com> wrote:
> > >
> > > On Wed, 18 Dec 2019 11:53:52 +0100
> > > Björn Töpel <bjorn.topel@gmail.com> wrote:
> > >
> > > >   $ sudo ./xdp_redirect_cpu --dev enp134s0f0 --cpu 22 xdp_cpu_map0
> > > >
> > > >   Running XDP/eBPF prog_name:xdp_cpu_map5_lb_hash_ip_pairs
> > > >   XDP-cpumap      CPU:to  pps            drop-pps    extra-info
> > > >   XDP-RX          20      7723038        0           0
> > > >   XDP-RX          total   7723038        0
> > > >   cpumap_kthread  total   0              0           0
> > > >   redirect_err    total   0              0
> > > >   xdp_exception   total   0              0
> > >
> > > Hmm... I'm missing some counters on the kthread side.
> > >
> >
> > Oh? Any ideas why? I just ran the upstream sample straight off.
>
> Looks like it happened in commit: bbaf6029c49c ("samples/bpf: Convert
> XDP samples to libbpf usage") (Cc Maciej).
>
> The old bpf_load.c will auto attach the tracepoints... for and libbpf
> you have to be explicit about it.

... or you can use a skeleton, which will auto-attach them as well,
provided the BPF programs' section names follow the expected naming
convention. So it might be a good idea to try it out.

>
> Can I ask you to also run a test with --stress-mode for
> ./xdp_redirect_cpu, to flush out any potential RCU race-conditions
> (don't provide output, this is just a robustness test).
>
> --
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer
>
Jesper Dangaard Brouer Dec. 19, 2019, 7:33 p.m. UTC | #9
On Wed, 18 Dec 2019 16:39:08 -0800
Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:

> On Wed, Dec 18, 2019 at 4:04 AM Jesper Dangaard Brouer
> <brouer@redhat.com> wrote:
> >
> > On Wed, 18 Dec 2019 12:39:53 +0100
> > Björn Töpel <bjorn.topel@gmail.com> wrote:
> >  
> > > On Wed, 18 Dec 2019 at 12:11, Jesper Dangaard Brouer <brouer@redhat.com> wrote:  
> > > >
> > > > On Wed, 18 Dec 2019 11:53:52 +0100
> > > > Björn Töpel <bjorn.topel@gmail.com> wrote:
> > > >  
> > > > >   $ sudo ./xdp_redirect_cpu --dev enp134s0f0 --cpu 22 xdp_cpu_map0
> > > > >
> > > > >   Running XDP/eBPF prog_name:xdp_cpu_map5_lb_hash_ip_pairs
> > > > >   XDP-cpumap      CPU:to  pps            drop-pps    extra-info
> > > > >   XDP-RX          20      7723038        0           0
> > > > >   XDP-RX          total   7723038        0
> > > > >   cpumap_kthread  total   0              0           0
> > > > >   redirect_err    total   0              0
> > > > >   xdp_exception   total   0              0  
> > > >
> > > > Hmm... I'm missing some counters on the kthread side.
> > > >  
> > >
> > > Oh? Any ideas why? I just ran the upstream sample straight off.  
> >
> > Looks like it happened in commit: bbaf6029c49c ("samples/bpf: Convert
> > XDP samples to libbpf usage") (Cc Maciej).
> >
> > The old bpf_load.c will auto attach the tracepoints... for and libbpf
> > you have to be explicit about it.  
> 
> ... or you can use skeleton, which will auto-attach them as well,
> provided BPF program's section names follow expected naming
> convention. So it might be a good idea to try it out.

To Andrii, can you provide some more info on how to use this new
skeleton system of yours?  (Pointers to code examples?)
Daniel Borkmann Dec. 19, 2019, 8:08 p.m. UTC | #10
On 12/19/19 8:33 PM, Jesper Dangaard Brouer wrote:
> On Wed, 18 Dec 2019 16:39:08 -0800
> Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
> 
>> On Wed, Dec 18, 2019 at 4:04 AM Jesper Dangaard Brouer
>> <brouer@redhat.com> wrote:
>>>
>>> On Wed, 18 Dec 2019 12:39:53 +0100
>>> Björn Töpel <bjorn.topel@gmail.com> wrote:
>>>   
>>>> On Wed, 18 Dec 2019 at 12:11, Jesper Dangaard Brouer <brouer@redhat.com> wrote:
>>>>>
>>>>> On Wed, 18 Dec 2019 11:53:52 +0100
>>>>> Björn Töpel <bjorn.topel@gmail.com> wrote:
>>>>>   
>>>>>>    $ sudo ./xdp_redirect_cpu --dev enp134s0f0 --cpu 22 xdp_cpu_map0
>>>>>>
>>>>>>    Running XDP/eBPF prog_name:xdp_cpu_map5_lb_hash_ip_pairs
>>>>>>    XDP-cpumap      CPU:to  pps            drop-pps    extra-info
>>>>>>    XDP-RX          20      7723038        0           0
>>>>>>    XDP-RX          total   7723038        0
>>>>>>    cpumap_kthread  total   0              0           0
>>>>>>    redirect_err    total   0              0
>>>>>>    xdp_exception   total   0              0
>>>>>
>>>>> Hmm... I'm missing some counters on the kthread side.
>>>>>   
>>>>
>>>> Oh? Any ideas why? I just ran the upstream sample straight off.
>>>
>>> Looks like it happened in commit: bbaf6029c49c ("samples/bpf: Convert
>>> XDP samples to libbpf usage") (Cc Maciej).
>>>
>>> The old bpf_load.c will auto attach the tracepoints... for and libbpf
>>> you have to be explicit about it.
>>
>> ... or you can use skeleton, which will auto-attach them as well,
>> provided BPF program's section names follow expected naming
>> convention. So it might be a good idea to try it out.
> 
> To Andrii, can you provide some more info on how to use this new
> skeleton system of yours?  (Pointers to code examples?)

There's a man page ;-)

https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/tree/tools/bpf/bpftool/Documentation/bpftool-gen.rst
Andrii Nakryiko Dec. 19, 2019, 10:56 p.m. UTC | #11
On Thu, Dec 19, 2019 at 12:08 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 12/19/19 8:33 PM, Jesper Dangaard Brouer wrote:
> > On Wed, 18 Dec 2019 16:39:08 -0800
> > Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
> >
> >> On Wed, Dec 18, 2019 at 4:04 AM Jesper Dangaard Brouer
> >> <brouer@redhat.com> wrote:
> >>>
> >>> On Wed, 18 Dec 2019 12:39:53 +0100
> >>> Björn Töpel <bjorn.topel@gmail.com> wrote:
> >>>
> >>>> On Wed, 18 Dec 2019 at 12:11, Jesper Dangaard Brouer <brouer@redhat.com> wrote:
> >>>>>
> >>>>> On Wed, 18 Dec 2019 11:53:52 +0100
> >>>>> Björn Töpel <bjorn.topel@gmail.com> wrote:
> >>>>>
> >>>>>>    $ sudo ./xdp_redirect_cpu --dev enp134s0f0 --cpu 22 xdp_cpu_map0
> >>>>>>
> >>>>>>    Running XDP/eBPF prog_name:xdp_cpu_map5_lb_hash_ip_pairs
> >>>>>>    XDP-cpumap      CPU:to  pps            drop-pps    extra-info
> >>>>>>    XDP-RX          20      7723038        0           0
> >>>>>>    XDP-RX          total   7723038        0
> >>>>>>    cpumap_kthread  total   0              0           0
> >>>>>>    redirect_err    total   0              0
> >>>>>>    xdp_exception   total   0              0
> >>>>>
> >>>>> Hmm... I'm missing some counters on the kthread side.
> >>>>>
> >>>>
> >>>> Oh? Any ideas why? I just ran the upstream sample straight off.
> >>>
> >>> Looks like it happened in commit: bbaf6029c49c ("samples/bpf: Convert
> >>> XDP samples to libbpf usage") (Cc Maciej).
> >>>
> >>> The old bpf_load.c will auto attach the tracepoints... for and libbpf
> >>> you have to be explicit about it.
> >>
> >> ... or you can use skeleton, which will auto-attach them as well,
> >> provided BPF program's section names follow expected naming
> >> convention. So it might be a good idea to try it out.
> >
> > To Andrii, can you provide some more info on how to use this new
> > skeleton system of yours?  (Pointers to code examples?)
>
> There's a man page ;-)
>
> https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/tree/tools/bpf/bpftool/Documentation/bpftool-gen.rst

Also see the runqslower patch set for an end-to-end setup. There are a
bunch of selftests already using skeletons (test_attach_probe,
test_core_extern, test_skeleton).
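
Roughly, the flow is (minimal sketch; object and skeleton names below
are made up for illustration):

  /* Generate the skeleton header from the compiled BPF object:
   *
   *   $ bpftool gen skeleton xdp_sample.o > xdp_sample.skel.h
   */
  #include "xdp_sample.skel.h"

  int main(void)
  {
          struct xdp_sample *skel;
          int err;

          skel = xdp_sample__open_and_load();
          if (!skel)
                  return 1;

          /* Attaches every program whose SEC() name libbpf recognizes,
           * e.g. SEC("tracepoint/xdp/xdp_cpumap_kthread").
           */
          err = xdp_sample__attach(skel);
          if (err)
                  goto out;

          /* ... read maps, poll, etc. ... */
  out:
          xdp_sample__destroy(skel);
          return err != 0;
  }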
test_core_extern, test_skeleton).