
[2/2,net-next] net: move qdisc ingress filtering code where it belongs

Message ID 1431277170-4618-3-git-send-email-pablo@netfilter.org
State Changes Requested, archived
Delegated to: David Miller

Commit Message

Pablo Neira Ayuso May 10, 2015, 4:59 p.m. UTC
The qdisc ingress filtering code is embedded into the core, most likely because
at that time we had no RCU in place to define a hook. This is semantically
wrong, as it violates the most basic rules of encapsulation.

On top of that, this special qdisc does not enqueue anything at all, so we can
skip the enqueue indirection through qdisc_enqueue_root(), which does things
that we don't need.
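
For reference, qdisc_enqueue_root() is roughly the helper below (paraphrased
from memory of include/net/sch_generic.h, so treat it as a sketch rather than
the exact source): it stamps pkt_len and masks the return code, neither of
which matters for a qdisc that never enqueues anything:

static inline int qdisc_enqueue_root(struct sk_buff *skb, struct Qdisc *sch)
{
        /* per-packet bookkeeping that an ingress-only qdisc never needs */
        qdisc_skb_cb(skb)->pkt_len = skb->len;
        return qdisc_enqueue(skb, sch) & NET_XMIT_MASK;
}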

This reduces the pollution of the super-critical ingress path, which most
users don't need, as has been stated many times before, e.g. in
24824a09 ("net: dynamic ingress_queue allocation").
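
In short, the core now only keeps an RCU-protected function pointer, and
sch_ingress registers and unregisters it as ingress qdiscs come and go;
condensed from the patch at the bottom of this mail:

static int ingress_init(struct Qdisc *sch, struct nlattr *opt)
{
        /* the first ingress qdisc in the system publishes the hook */
        if (net_ingress_queue_count() == 0)
                rcu_assign_pointer(qdisc_ingress_hook, qdisc_ingress_filter);
        net_inc_ingress_queue();
        sch->flags |= TCQ_F_CPUSTATS;
        return 0;
}

static void ingress_destroy(struct Qdisc *sch)
{
        net_dec_ingress_queue();
        /* the last one out clears the hook and waits for in-flight readers */
        if (net_ingress_queue_count() == 0) {
                rcu_assign_pointer(qdisc_ingress_hook, NULL);
                synchronize_rcu();
        }
}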

As a result, this improves performance in the super-critical ingress:

Before:

Result: OK: 4767946(c4767946+d0) usec, 100000000 (60byte,0frags)
  20973388pps 10067Mb/sec (10067226240bps) errors: 100000000

After:

Result: OK: 4747078(c4747078+d0) usec, 100000000 (60byte,0frags)
  21065587pps 10111Mb/sec (10111481760bps) errors: 100000000

This is roughly 92199 pps more, ~0.42% better performance on my old box.

Using pktgen rx injection, perf shows I'm profiling the right thing.

    36.12%  kpktgend_0  [kernel.kallsyms]  [k] __netif_receive_skb_core
    18.46%  kpktgend_0  [kernel.kallsyms]  [k] atomic_dec_and_test
    15.87%  kpktgend_0  [kernel.kallsyms]  [k] deliver_ptype_list_skb
     5.04%  kpktgend_0  [pktgen]           [k] pktgen_thread_worker
     4.81%  kpktgend_0  [kernel.kallsyms]  [k] netif_receive_skb_internal
     4.11%  kpktgend_0  [kernel.kallsyms]  [k] kfree_skb
     3.89%  kpktgend_0  [kernel.kallsyms]  [k] ip_rcv
     3.44%  kpktgend_0  [kernel.kallsyms]  [k] __rcu_read_unlock
     2.89%  kpktgend_0  [kernel.kallsyms]  [k] netif_receive_skb_sk
     2.14%  kpktgend_0  [kernel.kallsyms]  [k] __netif_receive_skb
     2.14%  kpktgend_0  [kernel.kallsyms]  [k] __rcu_read_lock
     0.57%  kpktgend_0  [kernel.kallsyms]  [k] __local_bh_enable_ip

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
 include/linux/netdevice.h |    3 +++
 include/linux/rtnetlink.h |    1 +
 net/core/dev.c            |   45 ++++++++++++---------------------------------
 net/sched/sch_ingress.c   |   43 +++++++++++++++++++++++++++++++++++++++----
 4 files changed, 55 insertions(+), 37 deletions(-)

Comments

Eric Dumazet May 10, 2015, 5:25 p.m. UTC | #1
On Sun, 2015-05-10 at 18:59 +0200, Pablo Neira Ayuso wrote:

> On top of that, this special qdisc does not enqueue anything at all, so we can
> skip the enqueue indirection from qdisc_enqueue_root() which is doing things
> that we don't need.

Note that we can get rid of qdisc_enqueue_root() completely, as
net/sched/sch_netem.c does not need need it either.



Alexei Starovoitov May 10, 2015, 5:45 p.m. UTC | #2
On 5/10/15 9:59 AM, Pablo Neira Ayuso wrote:
> The qdisc ingress filtering code is embedded into the core most likely because
> at that time we had no RCU in place to define a hook. This is semantically
> wrong as this violates the most basic rules of encapsulation.

Yet another attempt to sneak in 'qdisc_ingress_hook' to kill TC?
Just add another hook for netfilter. Seriously. Enough of these
politics.

> On top of that, this special qdisc does not enqueue anything at all, so we can
> skip the enqueue indirection from qdisc_enqueue_root() which is doing things
> that we don't need.

Daniel's patch does that as well, but in a much cleaner way.
Looks like you're stealing our ideas to hide the overhead that you're
adding to the ingress qdisc. Not cool.

> This reduces the pollution of the super-critical ingress path, where
> most users don't need this as it has been stated many times before.
> e.g. 24824a09 ("net: dynamic ingress_queue allocation").

Again, Daniel's patch accelerates the super-critical ingress path even more.
Care to carefully read it first?

> As a result, this improves performance in the super-critical ingress:
>
> Before:
>
> Result: OK: 4767946(c4767946+d0) usec, 100000000 (60byte,0frags)
>    20973388pps 10067Mb/sec (10067226240bps) errors: 100000000
>
> After:
>
> Result: OK: 4747078(c4747078+d0) usec, 100000000 (60byte,0frags)
>    21065587pps 10111Mb/sec (10111481760bps) errors: 100000000
>
> This is roughly 92199pps, ~0.42% more performance on my old box.

funny how the gain from the removal of qdisc_enqueue_root() is offset
by the added extra overhead. Compare your 0.42% with the clean gains
achieved by Daniel's patch set.

Pablo Neira Ayuso May 10, 2015, 5:59 p.m. UTC | #3
On Sun, May 10, 2015 at 10:45:42AM -0700, Alexei Starovoitov wrote:
> On 5/10/15 9:59 AM, Pablo Neira Ayuso wrote:
> >The qdisc ingress filtering code is embedded into the core most likely because
> >at that time we had no RCU in place to define a hook. This is semantically
> >wrong as this violates the most basic rules of encapsulation.
> 
> Yet another attempt to sneak in 'qdisc_ingress_hook' to kill TC ?
> Just add another hook for netfilter. Seriously. Enough of these
> politics.

Absolutely not. I will not kill TC, because people like Jamal like it,
and that's more than enough of an argument for me to keep it there.

I have to ask you to stop harassing me all over with non-technical
comments: "evil", "funny", ...

I'm getting quite enough of this, you stop that.

> Again, Daniel's patch accelerates super-critical ingress path even more.
> Care to carefully read it first?

No, Daniel is *not* benchmarking __netif_receive_skb_core() with no
filtering at all.

Alexei Starovoitov May 10, 2015, 6:05 p.m. UTC | #4
On 5/10/15 10:59 AM, Pablo Neira Ayuso wrote:
> On Sun, May 10, 2015 at 10:45:42AM -0700, Alexei Starovoitov wrote:
>> On 5/10/15 9:59 AM, Pablo Neira Ayuso wrote:
>>> The qdisc ingress filtering code is embedded into the core most likely because
>>> at that time we had no RCU in place to define a hook. This is semantically
>>> wrong as this violates the most basic rules of encapsulation.
>>
>> Yet another attempt to sneak in 'qdisc_ingress_hook' to kill TC ?
>> Just add another hook for netfilter. Seriously. Enough of these
>> politics.
>
> Absolutely not. I will not kill TC because people like jamal likes it,
> and that's more than an argument to me to keep it there.
>
> I have to ask you to stop harassing me all over with non-technical
> comments: "evil", "funny", ...

Please, I never called you 'evil'. Though we're arguing, it's ok,
because we both want the best for the kernel. We're just not on the same
page yet.
'funny' also doesn't apply to you.
If you feel offended, I'm sorry. I didn't mean it at all.

> I'm getting quite enough of this, you stop that.

agree. let's stick to exact technical points.
So, please, state clearly why you are so insistent on combining the
existing tc hook and the future netfilter hook into one, which creates
long-term headaches. What is wrong with two hooks?

>> Again, Daniel's patch accelerates super-critical ingress path even more.
>> Care to carefully read it first?
>
> No, Daniel is *not* benchmarking the netif_received_core() with no
> filtering at all.

sorry, not true. We did benchmark all combinations. Daniel posted
his, I'll send numbers from my box as well.

Pablo Neira Ayuso May 10, 2015, 6:24 p.m. UTC | #5
On Sun, May 10, 2015 at 11:05:28AM -0700, Alexei Starovoitov wrote:
> On 5/10/15 10:59 AM, Pablo Neira Ayuso wrote:
> >No, Daniel is *not* benchmarking the netif_received_core() with no
> >filtering at all.
> 
> sorry, not true. We did benchmark all combinations. Daniel posted
> his, I'll send numbers from my box as well.

Daniel said:

"The extra indirection layers however, are not necessary for calling
into ingress qdisc. pktgen calling locally into netif_receive_skb()
with a dummy u32, single CPU result on a Supermicro X10SLM-F, Xeon
E3-1240: before ~21,1 Mpps, after patch ~22,9 Mpps."

That explicitly refers to u32, hence the ingress qdisc, so he did *not*
post any numbers for the use case I'm indicating.

Alexei Starovoitov May 10, 2015, 6:47 p.m. UTC | #6
On 5/10/15 11:24 AM, Pablo Neira Ayuso wrote:
> On Sun, May 10, 2015 at 11:05:28AM -0700, Alexei Starovoitov wrote:
>> On 5/10/15 10:59 AM, Pablo Neira Ayuso wrote:
>>> No, Daniel is *not* benchmarking the netif_received_core() with no
>>> filtering at all.
>>
>> sorry, not true. We did benchmark all combinations. Daniel posted
>> his, I'll send numbers from my box as well.
>
> Daniel said:
>
> "The extra indirection layers however, are not necessary for calling
> into ingress qdisc. pktgen calling locally into netif_receive_skb()
> with a dummy u32, single CPU result on a Supermicro X10SLM-F, Xeon
> E3-1240: before ~21,1 Mpps, after patch ~22,9 Mpps."
>
> That explicitly refers to u32, hence qdisc ingress, so he did *not*
> post any number of the use case I'm indicating.

I think I'm starting to understand your concern.
You've read the patch in a way that it slows down netif_receive
_without_ ingress qdisc? Of course, that's not the case.

Here are the numbers from my box:
before:
no ingress - 37.6
ingress on other dev - 36.5
ingress on this dev - 28.8
ingress on this dev + u32 - 24.1

after Daniel's two patches:
no ingress - 37.6
ingress on other dev - 36.5
ingress on this dev - 36.5
ingress on this dev + u32 - 25.2

so when the ingress qdisc is not used, the difference is zero.
When the ingress qdisc is added to another device, the difference is zero.
The last two cases are the ones we wanted to accelerate, and we did.

Pablo Neira Ayuso May 10, 2015, 7 p.m. UTC | #7
On Sun, May 10, 2015 at 11:47:01AM -0700, Alexei Starovoitov wrote:
> On 5/10/15 11:24 AM, Pablo Neira Ayuso wrote:
> >On Sun, May 10, 2015 at 11:05:28AM -0700, Alexei Starovoitov wrote:
> >>On 5/10/15 10:59 AM, Pablo Neira Ayuso wrote:
> >>>No, Daniel is *not* benchmarking the netif_received_core() with no
> >>>filtering at all.
> >>
> >>sorry, not true. We did benchmark all combinations. Daniel posted
> >>his, I'll send numbers from my box as well.
> >
> >Daniel said:
> >
> >"The extra indirection layers however, are not necessary for calling
> >into ingress qdisc. pktgen calling locally into netif_receive_skb()
> >with a dummy u32, single CPU result on a Supermicro X10SLM-F, Xeon
> >E3-1240: before ~21,1 Mpps, after patch ~22,9 Mpps."
> >
> >That explicitly refers to u32, hence qdisc ingress, so he did *not*
> >post any number of the use case I'm indicating.
> 
> I think I'm starting to understand your concern.
> You've read the patch in a way that it slows down netif_receive
> _without_ ingress qdisc?

No. What I said regarding my patchset was:

"This patch improves performance of the super-critical ingress path by
moving the qdisc ingress code to sch_ingress, where this really
belongs."

The code inlined into the core ingress path seems to have an impact on
people who don't need it, even with the static key.

Alexei Starovoitov May 10, 2015, 7:06 p.m. UTC | #8
On 5/10/15 12:00 PM, Pablo Neira Ayuso wrote:
>
> The inlined code into the ingress core path seems to have an impact to
> people that don't need this, even with the static key.

two emails ago you accused me of non-technical comments, and
now I've posted real numbers that show no impact on users that don't
enable ingress, and you still say 'seems to have an impact'?!
I'm speechless.

Pablo Neira Ayuso May 10, 2015, 7:20 p.m. UTC | #9
On Sun, May 10, 2015 at 12:06:34PM -0700, Alexei Starovoitov wrote:
> On 5/10/15 12:00 PM, Pablo Neira Ayuso wrote:
> >
> >The inlined code into the ingress core path seems to have an impact to
> >people that don't need this, even with the static key.
> 
> two emails ago you've accused me of non-technical comments and
> now I've posted real numbers that show no impact on users that don't
> enable ingress and you still say 'seems to have an impact' ?!
> I'm speechless.

No. At the risk of repeating myself: the existing approach, which
inlines handle_ing() into __netif_receive_skb_core(), and your
approach, since it persists with that, have a performance impact on
everyone on earth.

It's quite clear from my patchset description.
Alexei Starovoitov May 10, 2015, 7:37 p.m. UTC | #10
On 5/10/15 12:20 PM, Pablo Neira Ayuso wrote:
> On Sun, May 10, 2015 at 12:06:34PM -0700, Alexei Starovoitov wrote:
>> On 5/10/15 12:00 PM, Pablo Neira Ayuso wrote:
>>>
>>> The inlined code into the ingress core path seems to have an impact to
>>> people that don't need this, even with the static key.
>>
>> two emails ago you've accused me of non-technical comments and
>> now I've posted real numbers that show no impact on users that don't
>> enable ingress and you still say 'seems to have an impact' ?!
>> I'm speechless.
>
> No. On the danger of repeating myself: The existing approach that
> inlines handle_ing() into __netif_receive_skb_core(), and your
> approach since it's persists on that, has an impact in performance on
> everyone in the earth.

Another non-technical 'guess' ?

baseline with Daniel's two patches:
    text	   data	    bss	    dec	    hex	filename
10605509	1885208	1400832	13891549	 d3f7dd	vmlinux

then with:
-static inline struct sk_buff *handle_ing(struct sk_buff *skb,
+static noinline struct sk_buff *handle_ing(struct sk_buff *skb,

    text	   data	    bss	    dec	    hex	filename
10605572	1885208	1400832	13891612	 d3f81c	vmlinux

so not inlining handle_ing() may actually have an impact on everyone,
because .text gets bigger, though only marginally.

btw, after removing the 'inline' keyword gcc still inlines it automatically,
and looking at the above numbers gcc is making the right call.

Pablo Neira Ayuso May 10, 2015, 7:50 p.m. UTC | #11
On Sun, May 10, 2015 at 12:37:10PM -0700, Alexei Starovoitov wrote:
> On 5/10/15 12:20 PM, Pablo Neira Ayuso wrote:
[...]
> Another non-technical 'guess' ?
> 
> baseline with Daniel's two patches:
>    text	   data	    bss	    dec	    hex	filename
> 10605509	1885208	1400832	13891549	 d3f7dd	vmlinux
> 
> then with:
> -static inline struct sk_buff *handle_ing(struct sk_buff *skb,
> +static noinline struct sk_buff *handle_ing(struct sk_buff *skb,
> 
>    text	   data	    bss	    dec	    hex	filename
> 10605572	1885208	1400832	13891612	 d3f81c	vmlinux
> 
> so not inlining handle_ing() actually may have an impact on everyone,
> because .text gets bigger. Though only marginally.
> 
> btw, after removing 'inline' keyword gcc still inlines it automatically
> and looking at the above numbers gcc is doing the right call.

Please, stop this.

The numbers show that the existing approach and your approach result
in less performance for everyone who doesn't need to filter from
ingress. We have to move ingress to where it belongs.

Daniel Borkmann May 10, 2015, 8:40 p.m. UTC | #12
On 05/10/2015 08:47 PM, Alexei Starovoitov wrote:
...
> You've read the patch in a way that it slows down netif_receive
> _without_ ingress qdisc? Of course, that's not the case.
>
> Here are the number from my box:
> before:
> no ingress - 37.6
> ingress on other dev - 36.5
> ingress on this dev - 28.8
> ingress on this dev + u32 - 24.1
>
> after Daniel's two patches:
> no ingress - 37.6
> ingress on other dev - 36.5
> ingress on this dev - 36.5
> ingress on this dev + u32 - 25.2
>
> so when ingress qdisc is not used, the difference is zero.
> When ingress qdisc is added to another device - difference is zero.
> The last two numbers that we wanted to accelerate and we did.

+1, exactly!
Daniel Borkmann May 10, 2015, 9:31 p.m. UTC | #13
On 05/10/2015 09:50 PM, Pablo Neira Ayuso wrote:
...
> The numbers show that the existing approach and your approach results
> in less performance for everyone that don't need to filter from
> ingress. We have to move ingress to where it belongs.

Your cleanup in patch 1 is okay, thanks for spotting it Pablo.

I agree with you on qdisc_enqueue_root(): it's not needed, and I removed
it in my set as well. Please note that my set doesn't introduce a
regression; it actually improves ingress performance.

If there's no ingress user, then that code path is simply *nop*'ed out.
If there's an ingress qdisc present on one device but not on others, it
also doesn't make anything slower compared to the current state. And you
can always compile out CONFIG_NET_CLS_ACT (which we could actually make
more fine-grained), if you really care.

A next possible step would be to get rid of the ingress netdev queue so
we can also reduce memory overhead. The only thing that is needed is
the classifier list, which is then invoked; we have all stated that
many times previously.
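
A rough sketch of that direction, assuming the per-device ingress_cl_list
from Daniel's series (purely illustrative, not a tested patch):

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <net/pkt_cls.h>
#include <net/sch_generic.h>

static struct sk_buff *ingress_classify_sketch(struct sk_buff *skb)
{
        /* hypothetical: the classifier list hangs directly off the netdev,
         * with no netdev_queue and no qdisc in between
         */
        struct tcf_proto *cl = rcu_dereference_bh(skb->dev->ingress_cl_list);
        struct tcf_result res;

        if (!cl)
                return skb;

        switch (tc_classify(skb, cl, &res)) {
        case TC_ACT_SHOT:
        case TC_ACT_STOLEN:
                kfree_skb(skb);
                return NULL;
        }
        return skb;
}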

My other concern is that if we export the qdisc_ingress_hook function
pointer, out-of-tree modules can simply do
rcu_assign_pointer(qdisc_ingress_hook, my_own_handler) to transparently
implement their own hook, hm.
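
To make that concern concrete, a minimal (hypothetical) out-of-tree module
could look like the sketch below; my_own_handler is just the placeholder
name from above, nothing like this exists in the tree:

#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/rcupdate.h>
#include <linux/skbuff.h>

static struct sk_buff *my_own_handler(struct sk_buff *skb)
{
        /* sees every packet at the ingress hook, bypassing TC entirely */
        return skb;
}

static int __init hijack_init(void)
{
        rcu_assign_pointer(qdisc_ingress_hook, my_own_handler);
        return 0;
}

static void __exit hijack_exit(void)
{
        rcu_assign_pointer(qdisc_ingress_hook, NULL);
        synchronize_rcu();      /* wait for readers before the module goes away */
}

module_init(hijack_init);
module_exit(hijack_exit);
MODULE_LICENSE("GPL");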

Best,
Daniel
Daniel Borkmann May 10, 2015, 9:44 p.m. UTC | #14
On 05/10/2015 11:31 PM, Daniel Borkmann wrote:
> On 05/10/2015 09:50 PM, Pablo Neira Ayuso wrote:
> ...
>> The numbers show that the existing approach and your approach results
>> in less performance for everyone that don't need to filter from
>> ingress. We have to move ingress to where it belongs.
>
> Your cleanup in patch 1 is okay, thanks for spotting it Pablo.
>
> I agree with you on the qdisc_enqueue_root(), it's not needed, which I
> removed in my set as well. Please note that my set doesn't introduce a
> regression, it improves ingress performance however.
>
> If there's no ingress user than that code path is simply *nop*'ed out.
> If there's one ingress present on one device but not on others, it also
> doesn't make anything slower to the current state. And you can also always
> compile out CONFIG_NET_CLS_ACT (which we actually could make more fine
> grained), if you really care.

But I am still wondering: does your machine have static_key support?
If nothing is enabled, the code runs through a straight-line code path;
it's just a nop sitting there.

> A next possible step would be to get rid of the ingress netdev queue so
> we can also reduce memory overhead. The only thing that is needed is
> the classifier list, which is then being invoked, we all have stated
> that many times previously.
>
> My other concern is, if we export qdisc_ingress_hook function pointer,
> out of tree modules can simply do rcu_assign_pointer(qdisc_ingress_hook,
> my_own_handler) to transparently implement their own hook, hm.
Pablo Neira Ayuso May 10, 2015, 11:43 p.m. UTC | #15
On Sun, May 10, 2015 at 11:44:15PM +0200, Daniel Borkmann wrote:
> On 05/10/2015 11:31 PM, Daniel Borkmann wrote:
> >On 05/10/2015 09:50 PM, Pablo Neira Ayuso wrote:
> >...
> >>The numbers show that the existing approach and your approach results
> >>in less performance for everyone that don't need to filter from
> >>ingress. We have to move ingress to where it belongs.
> >
> >Your cleanup in patch 1 is okay, thanks for spotting it Pablo.
> >
> >I agree with you on the qdisc_enqueue_root(), it's not needed, which I
> >removed in my set as well. Please note that my set doesn't introduce a
> >regression, it improves ingress performance however.
> >
> >If there's no ingress user than that code path is simply *nop*'ed out.
> >If there's one ingress present on one device but not on others, it also
> >doesn't make anything slower to the current state. And you can also always
> >compile out CONFIG_NET_CLS_ACT (which we actually could make more fine
> >grained), if you really care.
> 
> But I am still wondering, does your machine have static_key support?

Yes:

CONFIG_JUMP_LABEL=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y

$ scripts/gcc-goto.sh gcc
y

> If nothing is enabled, the code runs through a straight-line code path,
> it's a nop that is there.

The noop is patched to an unconditional branch to skip that code, but
the code is still there in that path, even if it's dormant.

What the numbers show is rather simple: the more code there is in the
path, the less performance you get, and the qdisc-ingress-specific code
embedded there reduces performance for people who are not using the
ingress qdisc, hence it should go where it belongs. The static key
cannot save you from that.
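
For readers following the argument, the jump-label pattern being debated
looks roughly like the sketch below. The key name follows net/core/dev.c,
but the simplified handle_ing() signature and the rx_hot_path() wrapper are
illustrative only, not the actual source:

#include <linux/cache.h>
#include <linux/jump_label.h>
#include <linux/skbuff.h>

static struct static_key ingress_needed __read_mostly;  /* starts false */

void net_inc_ingress_queue(void)
{
        static_key_slow_inc(&ingress_needed);  /* patches the nop into a jump */
}

void net_dec_ingress_queue(void)
{
        static_key_slow_dec(&ingress_needed);  /* back to a nop when count hits 0 */
}

static noinline struct sk_buff *handle_ing(struct sk_buff *skb)
{
        /* stand-in for the real handle_ing(); classification would go here */
        return skb;
}

struct sk_buff *rx_hot_path(struct sk_buff *skb)
{
        /* one nop while the key is false; gcc emits the branch body out of
         * line, so it stays out of the hot I-cache footprint
         */
        if (static_key_false(&ingress_needed))
                skb = handle_ing(skb);
        return skb;
}
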
Alexei Starovoitov May 11, 2015, 5:57 a.m. UTC | #16
On 5/10/15 4:43 PM, Pablo Neira Ayuso wrote:
>
> The noop is patched to an unconditional branch to skip that code, but
> the code is still there in that path, even if it's dormant.
>
> What the numbers show is rather simple: The more code is in the path,
> the less performance you get, and the qdisc ingress specific code
> embedded there is reducing performance for people that are not using
> qdisc ingress, hence it should go where it belongs. The static key
> cannot save you from that.

hmm, did I miss these numbers ?

My numbers are showing the opposite. There is no degradation whatsoever.
To recap, here are the numbers from my box:
no ingress - 37.6
ingress on other dev - 36.5
ingress on this dev - 28.8
ingress on this dev + u32 - 24.1

after Daniel's two patches:
no ingress - 37.6
ingress on other dev - 36.5
ingress on this dev - 36.5
ingress on this dev + u32 - 25.2

Explanation of the first lines:
'no ingress' means pure netif_receive_skb with drop in ip_rcv.
This is the case when ingress qdisc is not attached to any of the
devices.
Here static_key is off, so:
         if (static_key_false(&ingress_needed)) {
                 ... never reaches here ...
                 skb = handle_ing(skb, &pt_prev, &ret, orig_dev);

So the code path is the same and numbers before and after are
proving it is the case.

'ingress on other dev' means that ingress qdisc is attached to
some other device. Meaning that static_key is now on and
handle_ing() after two patches does:
cl = rcu_dereference_bh(skb->dev->ingress_cl_list);
if (!cl)
   return skb; ... returns here ...

Prior to two Daniel's patches handle_ing() does:
  rxq = rcu_dereference(skb->dev->ingress_queue);
  if (!rxq || rcu_access_pointer(rxq->qdisc) == &noop_qdisc)
     return skb; ... returns here ...

so the number of instructions is the same for the 'ingress on other dev'
case too.
Not surprisingly, the before and after numbers for 'no ingress' and
'ingress on other dev' are exactly the same.

It sounds like you're saying that code that is not even being executed
is somehow affecting the speed? How is that possible?
What kind of benchmark do you run? I really want to get to the bottom
of it. We cannot use 'drive-by reviewing' and 'gut feel' to make
decisions. If you think my numbers are wrong, please rerun them.
I'm using pktgen with 'xmit_mode netif_receive'. If there is some
other benchmark please bring it on. Everyone will benefit.
If you think pktgen as a test is not representative of real numbers,
sure, that's a different discussion. Let's talk about what the right
test to use is. Just claiming that inlining a function into
netif_receive_skb hurts performance is bogus. Inlining hurts
when the size of the executed code grows beyond the I-cache size. That's
clearly not the case here. The compiler moves the inlined handle_ing()
into the cold path of netif_receive_skb. So for anyone who doesn't
enable the ingress qdisc there is no difference, since that code doesn't
even get into the I-cache.
Here is snippet from net/core/dev.s:
	/* below is line 3696: if (static_key_false(&ingress_needed)) */
         1:.byte 0x0f,0x1f,0x44,0x00,0
         .pushsection __jump_table,  "aw"
          .balign 8
          .quad 1b, .L1756, ingress_needed       #,
         .popsection
	/* below is line 3702: skb->tc_verd = 0; */
	movw    $0, 150(%rbx)   #, skb_310->tc_verd

As you can see, the inlined handle_ing() is not in the fall-through path;
it's placed by gcc at the end of __netif_receive_skb_core at the label
L1756. So I cannot possibly see how it can affect performance.
Even if we make handle_ing() twice as big, there still won't be
any difference for users that don't use the ingress qdisc.

Jamal Hadi Salim May 11, 2015, 12:49 p.m. UTC | #17
we need to agree on what the benchmark is. The metrics below are
fine for whoever is doing the measurements (expect different kernels
and hardware to behave differently, so a kernel config will help):

1) no ingress (if drop is in ip_rcv, does it look at some header?)
2) ingress on other dev (this should mean the #1 drop still applies)
3) ingress on this dev (#1 drop in effect)
4) ingress on this dev (#1 drop in effect)
5) ingress on this dev + drop at u32 classifier/action.

Variations of the above:
A) Current kernel
B) Ingress static key introduced
C) Pablo's vs Daniel's settings

The one thing I see in Pablo's comments is that extra code will slow
things down. I see Alexei refuting that below, since that code is
not being executed.
So run some variation with extra code to see what happens.

It is hard, but please stay calm and carry on - we all want
what's best.

cheers,
jamal

On 05/11/15 01:57, Alexei Starovoitov wrote:
> On 5/10/15 4:43 PM, Pablo Neira Ayuso wrote:
>>
>> The noop is patched to an unconditional branch to skip that code, but
>> the code is still there in that path, even if it's dormant.
>>
>> What the numbers show is rather simple: The more code is in the path,
>> the less performance you get, and the qdisc ingress specific code
>> embedded there is reducing performance for people that are not using
>> qdisc ingress, hence it should go where it belongs. The static key
>> cannot save you from that.
>
> hmm, did I miss these numbers ?
>
> My numbers are showing the opposite. There is no degradation whatsoever.
> To recap, here are the numbers from my box:
> no ingress - 37.6
> ingress on other dev - 36.5
> ingress on this dev - 28.8
> ingress on this dev + u32 - 24.1
>
> after Daniel's two patches:
> no ingress - 37.6
> ingress on other dev - 36.5
> ingress on this dev - 36.5
> ingress on this dev + u32 - 25.2
>
> Explanation of the first lines:
> 'no ingress' means pure netif_receive_skb with drop in ip_rcv.
> This is the case when ingress qdisc is not attached to any of the
> devices.
> Here static_key is off, so:
>          if (static_key_false(&ingress_needed)) {
>                  ... never reaches here ...
>                  skb = handle_ing(skb, &pt_prev, &ret, orig_dev);
>
> So the code path is the same and numbers before and after are
> proving it is the case.
>
> 'ingress on other dev' means that ingress qdisc is attached to
> some other device. Meaning that static_key is now on and
> handle_ing() after two patches does:
> cl = rcu_dereference_bh(skb->dev->ingress_cl_list);
> if (!cl)
>    return skb; ... returns here ...
>
> Prior to two Daniel's patches handle_ing() does:
>   rxq = rcu_dereference(skb->dev->ingress_queue);
>   if (!rxq || rcu_access_pointer(rxq->qdisc) == &noop_qdisc)
>      return skb; ... returns here ...
>
> so the number of instructions is the same for 'ingress on other dev'
> case too.
> Not surprisingly before and after numbers for 'no ingress' and
> 'ingress on other dev' are exactly the same.
>
> It sounds like you're saying that code that not even being executed
> is somehow affecting the speed? How is that possible ?
> What kind of benchmark do you run? I really want to get to the bottom
> of it. We cannot use 'drive-by reviewing' and 'gut feel' to make
> decisions. If you think my numbers are wrong, please rerun them.
> I'm using pktgen with 'xmit_mode netif_receive'. If there is some
> other benchmark please bring it on. Everyone will benefit.
> If you think pktgen as a test is not representative of real numbers,
> sure, that's a different discussion. Let's talk what is the right
> test to use. Just claiming that inline of a function into
> netif_receive_skb is hurting performance is bogus. Inlining hurts
> when size of executed code increases beyond I-cache size. That's
> clearly not the case here. Compilers moves inlined handle_ing()
> into cold path of netif_receive_skb. So for anyone who doesn't
> enable ingress qdisc there is no difference, since that code doesn't
> even get into I-cache.
> Here is snippet from net/core/dev.s:
>      /* below is line 3696: if (static_key_false(&ingress_needed)) */
>          1:.byte 0x0f,0x1f,0x44,0x00,0
>          .pushsection __jump_table,  "aw"
>           .balign 8
>           .quad 1b, .L1756, ingress_needed       #,
>          .popsection
>      /* below is line 3702: skb->tc_verd = 0; */
>      movw    $0, 150(%rbx)   #, skb_310->tc_verd
>
> As you can see the inlined handle_ing() is not in fall through path
> and it's placed by gcc at the end of netif_receive_skb_core at the label
> L1756. So I cannot possible see how it can affect performance.
> Even if we make handle_ing() twice as big, there is still won't be
> any difference for users that don't use ingress qdisc.
>


Patch

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 1899c74..85288b5 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -1299,6 +1299,9 @@  enum netdev_priv_flags {
 #define IFF_IPVLAN_MASTER		IFF_IPVLAN_MASTER
 #define IFF_IPVLAN_SLAVE		IFF_IPVLAN_SLAVE
 
+typedef struct sk_buff *qdisc_ingress_hook_t(struct sk_buff *skb);
+extern qdisc_ingress_hook_t __rcu *qdisc_ingress_hook;
+
 /**
  *	struct net_device - The DEVICE structure.
  *		Actually, this whole structure is a big mistake.  It mixes I/O
diff --git a/include/linux/rtnetlink.h b/include/linux/rtnetlink.h
index bd29ab4..7c204f2 100644
--- a/include/linux/rtnetlink.h
+++ b/include/linux/rtnetlink.h
@@ -82,6 +82,7 @@  struct netdev_queue *dev_ingress_queue_create(struct net_device *dev);
 #ifdef CONFIG_NET_CLS_ACT
 void net_inc_ingress_queue(void);
 void net_dec_ingress_queue(void);
+int net_ingress_queue_count(void);
 #endif
 
 extern void rtnetlink_init(void);
diff --git a/net/core/dev.c b/net/core/dev.c
index 862875e..14a07ec 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -1644,6 +1644,12 @@  void net_dec_ingress_queue(void)
 	static_key_slow_dec(&ingress_needed);
 }
 EXPORT_SYMBOL_GPL(net_dec_ingress_queue);
+
+int net_ingress_queue_count(void)
+{
+	return static_key_count(&ingress_needed);
+}
+EXPORT_SYMBOL_GPL(net_ingress_queue_count);
 #endif
 
 static struct static_key netstamp_needed __read_mostly;
@@ -3521,37 +3527,17 @@  EXPORT_SYMBOL_GPL(br_fdb_test_addr_hook);
 #endif
 
 #ifdef CONFIG_NET_CLS_ACT
-/* TODO: Maybe we should just force sch_ingress to be compiled in
- * when CONFIG_NET_CLS_ACT is? otherwise some useless instructions
- * a compare and 2 stores extra right now if we dont have it on
- * but have CONFIG_NET_CLS_ACT
- * NOTE: This doesn't stop any functionality; if you dont have
- * the ingress scheduler, you just can't add policies on ingress.
- *
- */
-static int ing_filter(struct sk_buff *skb, struct netdev_queue *rxq)
-{
-	int result = TC_ACT_OK;
-	struct Qdisc *q;
-
-	skb->tc_verd = SET_TC_AT(skb->tc_verd, AT_INGRESS);
-
-	q = rcu_dereference(rxq->qdisc);
-	if (q != &noop_qdisc) {
-		if (likely(!test_bit(__QDISC_STATE_DEACTIVATED, &q->state)))
-			result = qdisc_enqueue_root(skb, q);
-	}
-
-	return result;
-}
+qdisc_ingress_hook_t __rcu *qdisc_ingress_hook __read_mostly;
+EXPORT_SYMBOL_GPL(qdisc_ingress_hook);
 
 static inline struct sk_buff *handle_ing(struct sk_buff *skb,
 					 struct packet_type **pt_prev,
 					 int *ret, struct net_device *orig_dev)
 {
-	struct netdev_queue *rxq = rcu_dereference(skb->dev->ingress_queue);
+	qdisc_ingress_hook_t *ingress_hook;
 
-	if (!rxq || rcu_access_pointer(rxq->qdisc) == &noop_qdisc)
+	ingress_hook = rcu_dereference(qdisc_ingress_hook);
+	if (ingress_hook == NULL)
 		return skb;
 
 	if (*pt_prev) {
@@ -3559,14 +3545,7 @@  static inline struct sk_buff *handle_ing(struct sk_buff *skb,
 		*pt_prev = NULL;
 	}
 
-	switch (ing_filter(skb, rxq)) {
-	case TC_ACT_SHOT:
-	case TC_ACT_STOLEN:
-		kfree_skb(skb);
-		return NULL;
-	}
-
-	return skb;
+	return ingress_hook(skb);
 }
 #endif
 
diff --git a/net/sched/sch_ingress.c b/net/sched/sch_ingress.c
index a89cc32..0c20f89 100644
--- a/net/sched/sch_ingress.c
+++ b/net/sched/sch_ingress.c
@@ -54,9 +54,9 @@  static struct tcf_proto __rcu **ingress_find_tcf(struct Qdisc *sch,
 	return &p->filter_list;
 }
 
-/* --------------------------- Qdisc operations ---------------------------- */
+/* ------------------------------------------------------------- */
 
-static int ingress_enqueue(struct sk_buff *skb, struct Qdisc *sch)
+static int ingress_filter(struct sk_buff *skb, struct Qdisc *sch)
 {
 	struct ingress_qdisc_data *p = qdisc_priv(sch);
 	struct tcf_result res;
@@ -86,10 +86,42 @@  static int ingress_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 	return result;
 }
 
-/* ------------------------------------------------------------- */
+static int ing_filter(struct sk_buff *skb, struct netdev_queue *rxq)
+{
+	int result = TC_ACT_OK;
+	struct Qdisc *q = rcu_dereference(rxq->qdisc);
+
+	skb->tc_verd = SET_TC_AT(skb->tc_verd, AT_INGRESS);
+
+	if (q != &noop_qdisc) {
+		if (likely(!test_bit(__QDISC_STATE_DEACTIVATED, &q->state)))
+			result = ingress_filter(skb, q);
+	}
+
+	return result;
+}
+
+static struct sk_buff *qdisc_ingress_filter(struct sk_buff *skb)
+{
+	struct netdev_queue *rxq = rcu_dereference(skb->dev->ingress_queue);
+
+	if (!rxq || rcu_access_pointer(rxq->qdisc) == &noop_qdisc)
+		return skb;
+
+	switch (ing_filter(skb, rxq)) {
+	case TC_ACT_SHOT:
+	case TC_ACT_STOLEN:
+		kfree_skb(skb);
+		return NULL;
+	}
+
+	return skb;
+}
 
 static int ingress_init(struct Qdisc *sch, struct nlattr *opt)
 {
+	if (net_ingress_queue_count() == 0)
+		rcu_assign_pointer(qdisc_ingress_hook, qdisc_ingress_filter);
 	net_inc_ingress_queue();
 	sch->flags |= TCQ_F_CPUSTATS;
 
@@ -102,6 +134,10 @@  static void ingress_destroy(struct Qdisc *sch)
 
 	tcf_destroy_chain(&p->filter_list);
 	net_dec_ingress_queue();
+	if (net_ingress_queue_count() == 0) {
+		rcu_assign_pointer(qdisc_ingress_hook, NULL);
+		synchronize_rcu();
+	}
 }
 
 static int ingress_dump(struct Qdisc *sch, struct sk_buff *skb)
@@ -132,7 +168,6 @@  static struct Qdisc_ops ingress_qdisc_ops __read_mostly = {
 	.cl_ops		=	&ingress_class_ops,
 	.id		=	"ingress",
 	.priv_size	=	sizeof(struct ingress_qdisc_data),
-	.enqueue	=	ingress_enqueue,
 	.init		=	ingress_init,
 	.destroy	=	ingress_destroy,
 	.dump		=	ingress_dump,