Message ID | 20191226122917.431-1-qdkevin.kou@gmail.com |
---|---
State | Accepted |
Delegated to: | David Miller |
Series | [net-next] sctp: move trace_sctp_probe_path into sctp_outq_sack |
On Thu, Dec 26, 2019 at 12:29:17PM +0000, Kevin Kou wrote:
> The original patch bringed in the "SCTP ACK tracking trace event"
> feature was committed at Dec.20, 2017, it replaced jprobe usage
> with trace events, and bringed in two trace events, one is
> TRACE_EVENT(sctp_probe), another one is TRACE_EVENT(sctp_probe_path).
> The original patch intended to trigger the trace_sctp_probe_path in
> TRACE_EVENT(sctp_probe) as below code,
>
> +TRACE_EVENT(sctp_probe,
> +
> +	TP_PROTO(const struct sctp_endpoint *ep,
> +		 const struct sctp_association *asoc,
> +		 struct sctp_chunk *chunk),
> +
> +	TP_ARGS(ep, asoc, chunk),
> +
> +	TP_STRUCT__entry(
> +		__field(__u64, asoc)
> +		__field(__u32, mark)
> +		__field(__u16, bind_port)
> +		__field(__u16, peer_port)
> +		__field(__u32, pathmtu)
> +		__field(__u32, rwnd)
> +		__field(__u16, unack_data)
> +	),
> +
> +	TP_fast_assign(
> +		struct sk_buff *skb = chunk->skb;
> +
> +		__entry->asoc = (unsigned long)asoc;
> +		__entry->mark = skb->mark;
> +		__entry->bind_port = ep->base.bind_addr.port;
> +		__entry->peer_port = asoc->peer.port;
> +		__entry->pathmtu = asoc->pathmtu;
> +		__entry->rwnd = asoc->peer.rwnd;
> +		__entry->unack_data = asoc->unack_data;
> +
> +		if (trace_sctp_probe_path_enabled()) {
> +			struct sctp_transport *sp;
> +
> +			list_for_each_entry(sp, &asoc->peer.transport_addr_list,
> +					    transports) {
> +				trace_sctp_probe_path(sp, asoc);
> +			}
> +		}
> +	),
>
> But I found it did not work when I did testing, and trace_sctp_probe_path
> had no output, I finally found that there is trace buffer lock
> operation(trace_event_buffer_reserve) in include/trace/trace_events.h:
>
> static notrace void							\
> trace_event_raw_event_##call(void *__data, proto)			\
> {									\
> 	struct trace_event_file *trace_file = __data;			\
> 	struct trace_event_data_offsets_##call __maybe_unused __data_offsets;\
> 	struct trace_event_buffer fbuffer;				\
> 	struct trace_event_raw_##call *entry;				\
> 	int __data_size;						\
> 									\
> 	if (trace_trigger_soft_disabled(trace_file))			\
> 		return;							\
> 									\
> 	__data_size = trace_event_get_offsets_##call(&__data_offsets, args); \
> 									\
> 	entry = trace_event_buffer_reserve(&fbuffer, trace_file,	\
> 				 sizeof(*entry) + __data_size);		\
> 									\
> 	if (!entry)							\
> 		return;							\
> 									\
> 	tstruct								\
> 									\
> 	{ assign; }							\
> 									\
> 	trace_event_buffer_commit(&fbuffer);				\
> }
>
> The reason caused no output of trace_sctp_probe_path is that
> trace_sctp_probe_path written in TP_fast_assign part of
> TRACE_EVENT(sctp_probe), and it will be placed( { assign; } ) after the
> trace_event_buffer_reserve() when compiler expands Macro,
>
> 	entry = trace_event_buffer_reserve(&fbuffer, trace_file,	\
> 				 sizeof(*entry) + __data_size);		\
> 									\
> 	if (!entry)							\
> 		return;							\
> 									\
> 	tstruct								\
> 									\
> 	{ assign; }							\
>
> so trace_sctp_probe_path finally can not acquire trace_event_buffer
> and return no output, that is to say the nest of tracepoint entry function
> is not allowed. The function call flow is:
>
> trace_sctp_probe()
>  -> trace_event_raw_event_sctp_probe()
>   -> lock buffer
>    -> trace_sctp_probe_path()
>     -> trace_event_raw_event_sctp_probe_path() --nested
>      -> buffer has been locked and return no output.
>
> This patch is to remove trace_sctp_probe_path from the TP_fast_assign
> part of TRACE_EVENT(sctp_probe) to avoid the nest of entry function,
> and trigger sctp_probe_path_trace in sctp_outq_sack.
>
> After this patch, you can enable both events individually,
>
> # cd /sys/kernel/debug/tracing
> # echo 1 > events/sctp/sctp_probe/enable
> # echo 1 > events/sctp/sctp_probe_path/enable
>
> Or, you can enable all the events under sctp.
>
> # echo 1 > events/sctp/enable
>
> Signed-off-by: Kevin Kou <qdkevin.kou@gmail.com>

Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
From: Kevin Kou <qdkevin.kou@gmail.com>
Date: Thu, 26 Dec 2019 12:29:17 +0000

> This patch is to remove trace_sctp_probe_path from the TP_fast_assign
> part of TRACE_EVENT(sctp_probe) to avoid the nest of entry function,
> and trigger sctp_probe_path_trace in sctp_outq_sack.
 ...

Applied, but why did you remove the trace enabled check, just out of
curiosity?
>From: Kevin Kou <qdkevin.kou@xxxxxxxxx>
>Date: Thu, 26 Dec 2019 12:29:17 +0000
>
>> This patch is to remove trace_sctp_probe_path from the TP_fast_assign
>> part of TRACE_EVENT(sctp_probe) to avoid the nest of entry function,
>> and trigger sctp_probe_path_trace in sctp_outq_sack.
> ...
>
>Applied, but why did you remove the trace enabled check, just out of
>curiosity?

Actually, the check done by trace_sctp_probe_path_enabled is also done
inside trace_sctp_probe_path, according to the macro definition; both
check if (static_key_false(&__tracepoint_##name.key)).

include/linux/tracepoint.h:

#define __DECLARE_TRACE(name, proto, args, cond, data_proto, data_args) \
	extern struct tracepoint __tracepoint_##name;			\
	static inline void trace_##name(proto)				\
	{								\
		if (static_key_false(&__tracepoint_##name.key))		\
			__DO_TRACE(&__tracepoint_##name,		\
				TP_PROTO(data_proto),			\
				TP_ARGS(data_args),			\
				TP_CONDITION(cond), 0);			\
		if (IS_ENABLED(CONFIG_LOCKDEP) && (cond)) {		\
			rcu_read_lock_sched_notrace();			\
			rcu_dereference_sched(__tracepoint_##name.funcs);\
			rcu_read_unlock_sched_notrace();		\
		}							\
	}

	static inline bool						\
	trace_##name##_enabled(void)					\
	{								\
		return static_key_false(&__tracepoint_##name.key);	\
	}
From: Kevin Kou <qdkevin.kou@gmail.com>
Date: Fri, 27 Dec 2019 07:09:07 +0800

>>From: Kevin Kou <qdkevin.kou@xxxxxxxxx>
>>Date: Thu, 26 Dec 2019 12:29:17 +0000
>>
>>> This patch is to remove trace_sctp_probe_path from the TP_fast_assign
>>> part of TRACE_EVENT(sctp_probe) to avoid the nest of entry function,
>>> and trigger sctp_probe_path_trace in sctp_outq_sack.
>> ...
>>
>>Applied, but why did you remove the trace enabled check, just out of
>>curiosity?
>
> Actually, the check in trace_sctp_probe_path_enabled also done in
> trace_sctp_probe_path according to the Macro definition, both check
> if (static_key_false(&__tracepoint_##name.key)).

Indeed, thanks for the explanation.
On Thu, Dec 26, 2019 at 03:38:35PM -0800, David Miller wrote:
> From: Kevin Kou <qdkevin.kou@gmail.com>
> Date: Fri, 27 Dec 2019 07:09:07 +0800
>
>>>From: Kevin Kou <qdkevin.kou@xxxxxxxxx>
>>>Date: Thu, 26 Dec 2019 12:29:17 +0000
>>>
>>>> This patch is to remove trace_sctp_probe_path from the TP_fast_assign
>>>> part of TRACE_EVENT(sctp_probe) to avoid the nest of entry function,
>>>> and trigger sctp_probe_path_trace in sctp_outq_sack.
>>> ...
>>>
>>>Applied, but why did you remove the trace enabled check, just out of
>>>curiosity?
>>
>> Actually, the check in trace_sctp_probe_path_enabled also done in
>> trace_sctp_probe_path according to the Macro definition, both check
>> if (static_key_false(&__tracepoint_##name.key)).
>
> Indeed, thanks for the explanation.

It was duplicated, yes, but it was also a small optimization:

	if (enabled) {
		for (X times) {
			if (enabled) {
			}
		}
	}

So it wouldn't traverse the list if not needed. But X is usually 1 or
2 and this list is already traversed multiple times in this code path.
On 2019/12/27 10:28, Marcelo Ricardo Leitner wrote:
> On Thu, Dec 26, 2019 at 03:38:35PM -0800, David Miller wrote:
>> From: Kevin Kou <qdkevin.kou@gmail.com>
>> Date: Fri, 27 Dec 2019 07:09:07 +0800
>>
>>>> From: Kevin Kou <qdkevin.kou@xxxxxxxxx>
>>>> Date: Thu, 26 Dec 2019 12:29:17 +0000
>>>>
>>>>> This patch is to remove trace_sctp_probe_path from the TP_fast_assign
>>>>> part of TRACE_EVENT(sctp_probe) to avoid the nest of entry function,
>>>>> and trigger sctp_probe_path_trace in sctp_outq_sack.
>>>> ...
>>>>
>>>> Applied, but why did you remove the trace enabled check, just out of
>>>> curiosity?
>>>
>>> Actually, the check in trace_sctp_probe_path_enabled also done in
>>> trace_sctp_probe_path according to the Macro definition, both check
>>> if (static_key_false(&__tracepoint_##name.key)).
>>
>> Indeed, thanks for the explanation.
>
> It was duplicated, yes, but it was also a small optimization:
>
> 	if (enabled) {
> 		for (X times) {
> 			if (enabled) {
> 			}
> 		}
> 	}
>
> So it wouldn't traverse the list if not needed. But X is usually 1 or
> 2 and this list is already traversed multiple times in this code path.

Yes, it is a small optimization indeed. Let me change it a little, do
some testing, and resend another patch. Thanks for your comments, Marcelo.
diff --git a/include/trace/events/sctp.h b/include/trace/events/sctp.h
index 7475c7b..d4aac34 100644
--- a/include/trace/events/sctp.h
+++ b/include/trace/events/sctp.h
@@ -75,15 +75,6 @@
 		__entry->pathmtu = asoc->pathmtu;
 		__entry->rwnd = asoc->peer.rwnd;
 		__entry->unack_data = asoc->unack_data;
-
-		if (trace_sctp_probe_path_enabled()) {
-			struct sctp_transport *sp;
-
-			list_for_each_entry(sp, &asoc->peer.transport_addr_list,
-					    transports) {
-				trace_sctp_probe_path(sp, asoc);
-			}
-		}
 	),
 
 	TP_printk("asoc=%#llx mark=%#x bind_port=%d peer_port=%d pathmtu=%d "
diff --git a/net/sctp/outqueue.c b/net/sctp/outqueue.c
index a031d11..6b0b3ba 100644
--- a/net/sctp/outqueue.c
+++ b/net/sctp/outqueue.c
@@ -36,6 +36,7 @@
 #include <net/sctp/sctp.h>
 #include <net/sctp/sm.h>
 #include <net/sctp/stream_sched.h>
+#include <trace/events/sctp.h>
 
 /* Declare internal functions here. */
 static int sctp_acked(struct sctp_sackhdr *sack, __u32 tsn);
@@ -1238,6 +1239,11 @@ int sctp_outq_sack(struct sctp_outq *q, struct sctp_chunk *chunk)
 	/* Grab the association's destination address list. */
 	transport_list = &asoc->peer.transport_addr_list;
 
+	/* SCTP path tracepoint for congestion control debugging. */
+	list_for_each_entry(transport, transport_list, transports) {
+		trace_sctp_probe_path(transport, asoc);
+	}
+
 	sack_ctsn = ntohl(sack->cum_tsn_ack);
 	gap_ack_blocks = ntohs(sack->num_gap_ack_blocks);
 	asoc->stats.gapcnt += gap_ack_blocks;
The original patch that brought in the "SCTP ACK tracking trace event"
feature was committed on Dec. 20, 2017. It replaced the jprobe usage
with trace events and introduced two trace events: one is
TRACE_EVENT(sctp_probe), the other is TRACE_EVENT(sctp_probe_path).
The original patch intended to trigger trace_sctp_probe_path from
TRACE_EVENT(sctp_probe), as in the code below:

+TRACE_EVENT(sctp_probe,
+
+	TP_PROTO(const struct sctp_endpoint *ep,
+		 const struct sctp_association *asoc,
+		 struct sctp_chunk *chunk),
+
+	TP_ARGS(ep, asoc, chunk),
+
+	TP_STRUCT__entry(
+		__field(__u64, asoc)
+		__field(__u32, mark)
+		__field(__u16, bind_port)
+		__field(__u16, peer_port)
+		__field(__u32, pathmtu)
+		__field(__u32, rwnd)
+		__field(__u16, unack_data)
+	),
+
+	TP_fast_assign(
+		struct sk_buff *skb = chunk->skb;
+
+		__entry->asoc = (unsigned long)asoc;
+		__entry->mark = skb->mark;
+		__entry->bind_port = ep->base.bind_addr.port;
+		__entry->peer_port = asoc->peer.port;
+		__entry->pathmtu = asoc->pathmtu;
+		__entry->rwnd = asoc->peer.rwnd;
+		__entry->unack_data = asoc->unack_data;
+
+		if (trace_sctp_probe_path_enabled()) {
+			struct sctp_transport *sp;
+
+			list_for_each_entry(sp, &asoc->peer.transport_addr_list,
+					    transports) {
+				trace_sctp_probe_path(sp, asoc);
+			}
+		}
+	),

But when I tested it, it did not work: trace_sctp_probe_path produced
no output. I eventually found the reason in the trace buffer reservation
(trace_event_buffer_reserve) in include/trace/trace_events.h:

static notrace void							\
trace_event_raw_event_##call(void *__data, proto)			\
{									\
	struct trace_event_file *trace_file = __data;			\
	struct trace_event_data_offsets_##call __maybe_unused __data_offsets;\
	struct trace_event_buffer fbuffer;				\
	struct trace_event_raw_##call *entry;				\
	int __data_size;						\
									\
	if (trace_trigger_soft_disabled(trace_file))			\
		return;							\
									\
	__data_size = trace_event_get_offsets_##call(&__data_offsets, args); \
									\
	entry = trace_event_buffer_reserve(&fbuffer, trace_file,	\
				 sizeof(*entry) + __data_size);		\
									\
	if (!entry)							\
		return;							\
									\
	tstruct								\
									\
	{ assign; }							\
									\
	trace_event_buffer_commit(&fbuffer);				\
}

trace_sctp_probe_path produced no output because it was written in the
TP_fast_assign part of TRACE_EVENT(sctp_probe), and that part
( { assign; } ) is placed after trace_event_buffer_reserve() when the
compiler expands the macro:

	entry = trace_event_buffer_reserve(&fbuffer, trace_file,	\
				 sizeof(*entry) + __data_size);		\
									\
	if (!entry)							\
		return;							\
									\
	tstruct								\
									\
	{ assign; }							\

So trace_sctp_probe_path cannot acquire the trace_event_buffer and
produces no output; that is to say, nesting tracepoint entry functions
is not allowed. The function call flow is:

trace_sctp_probe()
 -> trace_event_raw_event_sctp_probe()
  -> lock buffer
   -> trace_sctp_probe_path()
    -> trace_event_raw_event_sctp_probe_path() -- nested
     -> buffer has been locked, so no output.

This patch removes trace_sctp_probe_path from the TP_fast_assign part
of TRACE_EVENT(sctp_probe) to avoid nesting entry functions, and
triggers trace_sctp_probe_path in sctp_outq_sack instead.

After this patch, you can enable both events individually:

# cd /sys/kernel/debug/tracing
# echo 1 > events/sctp/sctp_probe/enable
# echo 1 > events/sctp/sctp_probe_path/enable

Or, you can enable all the events under sctp:

# echo 1 > events/sctp/enable

Signed-off-by: Kevin Kou <qdkevin.kou@gmail.com>
---
 include/trace/events/sctp.h | 9 ---------
 net/sctp/outqueue.c         | 6 ++++++
 2 files changed, 6 insertions(+), 9 deletions(-)