[net,2/2] conntrack: enable to tune gc parameters

Message ID 20161010140424.GB21057@breakpoint.cc
State Awaiting Upstream, archived
Delegated to: David Miller

Commit Message

Florian Westphal Oct. 10, 2016, 2:04 p.m. UTC
Nicolas Dichtel <nicolas.dichtel@6wind.com> wrote:
> After commit b87a2f9199ea ("netfilter: conntrack: add gc worker to remove
> timed-out entries"), netlink conntrack deletion events may be sent with a
> huge delay. It could be interesting to let the user tweak gc parameters
> depending on its use case.

Hmm, care to elaborate?

I am not against doing this but I'd like to hear/read your use case.

The expectation is that in almost all cases eviction will happen from
the packet path.  The gc worker is just there for the case where a busy
system goes idle.

> +nf_conntrack_gc_max_evicts - INTEGER
> +	The maximum number of entries to be evicted during a run of gc.
> +	This sysctl is only writeable in the initial net namespace.

Hmmm, do you have any advice on sizing this one?

I think a better change might be (instead of adding this knob) to
reschedule the gc worker for immediate re-execution in case the entire
"budget" was used.  What do you think?

Comments

Nicolas Dichtel Oct. 10, 2016, 3:24 p.m. UTC | #1
On 10/10/2016 16:04, Florian Westphal wrote:
> Nicolas Dichtel <nicolas.dichtel@6wind.com> wrote:
>> After commit b87a2f9199ea ("netfilter: conntrack: add gc worker to remove
>> timed-out entries"), netlink conntrack deletion events may be sent with a
>> huge delay. It could be interesting to let the user tweak gc parameters
>> depending on its use case.
> 
> Hmm, care to elaborate?
> 
> I am not against doing this but I'd like to hear/read your use case.
> 
> The expectation is that in almost all cases eviction will happen from
> the packet path.  The gc worker is just there for the case where a busy
> system goes idle.
It was precisely that case. After a period of activity, the event is sent a long
time after the timeout. If the router does not manage a lot of flows, why not
try to scan more entries instead of the default 1/64 of the table?
In fact, I don't understand why GC_MAX_BUCKETS_DIV is used instead of always
using GC_MAX_BUCKETS, whatever the size of the table is.
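
To put a rough number on that delay: a back-of-the-envelope sketch, assuming the
GC_MAX_BUCKETS_DIV = 64 and GC_INTERVAL = 5 * HZ defaults from commit
b87a2f9199ea (both constants are assumptions taken from that commit, not values
stated in this thread):

#include <stdio.h>

/* Worst case for a timed-out entry whose bucket the gc cursor has just
 * passed: it waits for almost a full sweep of the table before the
 * deletion event is generated.
 */
int main(void)
{
	unsigned int runs_per_sweep = 64;	/* GC_MAX_BUCKETS_DIV: 1/64 of the table per run */
	unsigned int interval_secs  = 5;	/* GC_INTERVAL / HZ */

	printf("worst-case deletion-event delay: ~%u seconds\n",
	       runs_per_sweep * interval_secs);
	return 0;
}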

> 
>> +nf_conntrack_gc_max_evicts - INTEGER
>> +	The maximum number of entries to be evicted during a run of gc.
>> +	This sysctl is only writeable in the initial net namespace.
> 
> Hmmm, do you have any advice on sizing this one?
In fact, no ;-)
I really hesitated between exposing all four values and just a subset. My goal
was also to get feedback. I can remove this one.

> 
> I think a better change might be (instead of adding this knob) to
> reschedule the gc worker for immediate re-execution in case the entire
> "budget" was used.  What do you think?
Even if it's not directly related to my problem, I think it's a good idea.

> 
> 
> diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
> --- a/net/netfilter/nf_conntrack_core.c
> +++ b/net/netfilter/nf_conntrack_core.c
> @@ -983,7 +983,7 @@ static void gc_worker(struct work_struct *work)
>                 return;
>  
>         ratio = scanned ? expired_count * 100 / scanned : 0;
> -       if (ratio >= 90)
> +       if (ratio >= 90 || expired_count == GC_MAX_EVICTS)
>                 next_run = 0;
>
Florian Westphal Oct. 13, 2016, 8:43 p.m. UTC | #2
Nicolas Dichtel <nicolas.dichtel@6wind.com> wrote:
> On 10/10/2016 16:04, Florian Westphal wrote:
> > Nicolas Dichtel <nicolas.dichtel@6wind.com> wrote:
> >> After commit b87a2f9199ea ("netfilter: conntrack: add gc worker to remove
> >> timed-out entries"), netlink conntrack deletion events may be sent with a
> >> huge delay. It could be interesting to let the user tweak gc parameters
> >> depending on its use case.
> > 
> > Hmm, care to elaborate?
> > 
> > I am not against doing this but I'd like to hear/read your use case.
> > 
> > The expectation is that in almost all cases eviction will happen from
> > the packet path.  The gc worker is just there for the case where a busy
> > system goes idle.
> It was precisely that case. After a period of activity, the event is sent a long
> time after the timeout. If the router does not manage a lot of flows, why not
> try to scan more entries instead of the default 1/64 of the table?
> In fact, I don't understand why GC_MAX_BUCKETS_DIV is used instead of always
> using GC_MAX_BUCKETS, whatever the size of the table is.

I wanted to make sure that we have a known upper bound on the number of
buckets we process so that we do not block other pending kworker items
for too long.

(Or cause too many useless scans)

Another idea worth trying might be to get rid of the max cap and
instead break early in case too many jiffies expired.

I don't want to add sysctl knobs for this unless absolutely needed; it's already
possible to 'force' an eviction cycle by running 'conntrack -L'.
Nicolas Dichtel Oct. 14, 2016, 10:12 a.m. UTC | #3
On 13/10/2016 22:43, Florian Westphal wrote:
> Nicolas Dichtel <nicolas.dichtel@6wind.com> wrote:
>> On 10/10/2016 16:04, Florian Westphal wrote:
>>> Nicolas Dichtel <nicolas.dichtel@6wind.com> wrote:
>>>> After commit b87a2f9199ea ("netfilter: conntrack: add gc worker to remove
>>>> timed-out entries"), netlink conntrack deletion events may be sent with a
>>>> huge delay. It could be interesting to let the user tweak gc parameters
>>>> depending on its use case.
>>>
>>> Hmm, care to elaborate?
>>>
>>> I am not against doing this but I'd like to hear/read your use case.
>>>
>>> The expectation is that in almost all cases eviction will happen from
>>> the packet path.  The gc worker is just there for the case where a busy
>>> system goes idle.
>> It was precisely that case. After a period of activity, the event is sent a long
>> time after the timeout. If the router does not manage a lot of flows, why not
>> try to scan more entries instead of the default 1/64 of the table?
>> In fact, I don't understand why GC_MAX_BUCKETS_DIV is used instead of always
>> using GC_MAX_BUCKETS, whatever the size of the table is.
> 
> I wanted to make sure that we have a known upper bound on the number of
> buckets we process so that we do not block other pending kworker items
> for too long.
I don't understand. GC_MAX_BUCKETS is the upper bound and I agree that it is
needed. But why GC_MAX_BUCKETS_DIV (i.e. 1/64)?
In other words, why this line:
goal = min(nf_conntrack_htable_size / GC_MAX_BUCKETS_DIV, GC_MAX_BUCKETS);
instead of:
goal = GC_MAX_BUCKETS;
?

> 
> (Or cause too many useless scans)
> 
> Another idea worth trying might be to get rid of the max cap and
> instead break early in case too many jiffies expired.
> 
> I don't want to add sysctl knobs for this unless absolutely needed; it's already
> possible to 'force' an eviction cycle by running 'conntrack -L'.
> 
Sure, but this is not a "real" solution, just a workaround.
We need to find a way to deliver conntrack deletion events within a reasonable
delay, whatever the traffic on the machine is.
Florian Westphal Oct. 14, 2016, 10:37 a.m. UTC | #4
Nicolas Dichtel <nicolas.dichtel@6wind.com> wrote:
> On 13/10/2016 22:43, Florian Westphal wrote:
> > Nicolas Dichtel <nicolas.dichtel@6wind.com> wrote:
> >> On 10/10/2016 16:04, Florian Westphal wrote:
> >>> Nicolas Dichtel <nicolas.dichtel@6wind.com> wrote:
> >>>> After commit b87a2f9199ea ("netfilter: conntrack: add gc worker to remove
> >>>> timed-out entries"), netlink conntrack deletion events may be sent with a
> >>>> huge delay. It could be interesting to let the user tweak gc parameters
> >>>> depending on its use case.
> >>>
> >>> Hmm, care to elaborate?
> >>>
> >>> I am not against doing this but I'd like to hear/read your use case.
> >>>
> >>> The expectation is that in almost all cases eviction will happen from
> >>> the packet path.  The gc worker is just there for the case where a busy
> >>> system goes idle.
> >> It was precisely that case. After a period of activity, the event is sent a long
> >> time after the timeout. If the router does not manage a lot of flows, why not
> >> try to scan more entries instead of the default 1/64 of the table?
> >> In fact, I don't understand why GC_MAX_BUCKETS_DIV is used instead of always
> >> using GC_MAX_BUCKETS, whatever the size of the table is.
> > 
> > I wanted to make sure that we have a known upper bound on the number of
> > buckets we process so that we do not block other pending kworker items
> > for too long.
> I don't understand. GC_MAX_BUCKETS is the upper bound and I agree that it is
> needed. But why GC_MAX_BUCKETS_DIV (i.e. 1/64)?
> In other words, why this line:
> goal = min(nf_conntrack_htable_size / GC_MAX_BUCKETS_DIV, GC_MAX_BUCKETS);
> instead of:
> goal = GC_MAX_BUCKETS;

Sure, we can do that.  But why is a fixed size better than a fraction?

E.g. with 8k buckets and a simple goal = GC_MAX_BUCKETS we would scan the
entire table on every run; currently we only scan 128.

I wanted to keep too many destroy notifications from firing at once
but maybe I was too paranoid...

> > (Or cause too many useless scans)
> > 
> > Another idea worth trying might be to get rid of the max cap and
> > instead break early in case too many jiffies expired.
> > 
> > I don't want to add sysctl knobs for this unless absolutely needed; it's already
> > possible to 'force' an eviction cycle by running 'conntrack -L'.
> > 
> Sure, but this is not a "real" solution, just a workaround.
> We need to find a way to deliver conntrack deletion events within a reasonable
> delay, whatever the traffic on the machine is.

Agree, but that depends on what 'reasonable' means and what kind of
unneeded cpu churn we're willing to add.

We can add a sysctl for this but we should use a low default to not do
too much unneeded work.

So what about your original patch, but only add

nf_conntrack_gc_interval

(and also add an instant resched in case the entire budget was consumed)?
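
For illustration, a rough sketch of that proposal (a tunable interval plus an
immediate resched when the whole budget was consumed). The name
nf_conntrack_gc_interval and the helper below are hypothetical, not the actual
patch:

#include <linux/types.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>

static unsigned int nf_conntrack_gc_interval = 5000;	/* ms, exposed via sysctl */

static void gc_worker_reschedule(struct delayed_work *dwork, bool budget_exhausted)
{
	unsigned long delay = 0;

	/* Entire eviction budget used up: run again right away.
	 * Otherwise wait for the user-tunable interval.
	 */
	if (!budget_exhausted)
		delay = msecs_to_jiffies(nf_conntrack_gc_interval);

	queue_delayed_work(system_long_wq, dwork, delay);
}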
Pablo Neira Ayuso Oct. 14, 2016, 10:53 a.m. UTC | #5
On Fri, Oct 14, 2016 at 12:37:26PM +0200, Florian Westphal wrote:
> Nicolas Dichtel <nicolas.dichtel@6wind.com> wrote:
> > On 13/10/2016 22:43, Florian Westphal wrote:
[...]
> > > (Or cause too many useless scans)
> > > 
> > > Another idea worth trying might be to get rid of the max cap and
> > > instead break early in case too many jiffies expired.
> > > 
> > > I don't want to add sysctl knobs for this unless absolutely needed; it's already
> > > possible to 'force' an eviction cycle by running 'conntrack -L'.
> > > 
> > Sure, but this is not a "real" solution, just a workaround.
> > We need to find a way to deliver conntrack deletion events within a reasonable
> > delay, whatever the traffic on the machine is.
> 
> Agree, but that depends on what 'reasonable' means and what kind of
> unneeded cpu churn we're willing to add.
> 
> We can add a sysctl for this but we should use a low default to not do
> too much unneeded work.
> 
> So what about your original patch, but only add
> 
> nf_conntrack_gc_interval
> 
> (and also add an instant resched in case the entire budget was consumed)?

I would prefer not to expose sysctl knobs; if we don't really know
what good default values are, then we cannot expect our users to
know this for us.

I would rather tune this in a way that it resembles the previous
behaviour.
Florian Westphal Oct. 14, 2016, 11:16 a.m. UTC | #6
Pablo Neira Ayuso <pablo@netfilter.org> wrote:
> I would prefer not to expose sysctl knobs; if we don't really know
> what good default values are, then we cannot expect our users to
> know this for us.
> 
> I would rather tune this in a way that it resembles the previous
> behaviour.

I do not see how this is possible without reverting to the old
per-conntrack timer scheme.

With a per-ct timer, userspace gets notified the moment the timer
fires; without it, notification comes 'when the kernel detects the
timeout', which in the worst case, as Nicolas describes, is when the
gc worker comes along.
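
For context, a simplified sketch of that old per-conntrack timer idea (each
entry owned a kernel timer that fired exactly at the timeout); illustrative
only, the real pre-b87a2f9199ea code differs in detail:

#include <linux/timer.h>

static void death_by_timeout(unsigned long ul_conntrack)
{
	/* kill the entry and emit the netlink destroy event immediately */
}

static void arm_conntrack_timer(struct timer_list *timeout,
				unsigned long ul_conntrack,
				unsigned long expires)
{
	setup_timer(timeout, death_by_timeout, ul_conntrack);
	mod_timer(timeout, expires);	/* re-armed whenever the entry is refreshed */
}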

You can run the gc worker every jiffy of course, but that's just
wasting cpu cycles (and you still get a small delay).

I don't see a way to do run-time tuning except faster restarts when
old entries start accumulating.  This is what the code tries to do;
perhaps you have a better idea for the 'next gc run' computation.
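
As one possible shape for that computation, a purely illustrative sketch (not
the code in this patch): re-run immediately when the scan was dominated by
expired entries, poll faster while some expire, and back off toward a maximum
when the table is clean.

static unsigned long next_gc_delay(unsigned int expired, unsigned int scanned,
				   unsigned long cur_delay,
				   unsigned long min_delay,
				   unsigned long max_delay)
{
	unsigned int ratio = scanned ? expired * 100 / scanned : 0;

	if (ratio >= 90)
		return 0;			/* mostly expired: re-run now */

	if (ratio > 10) {			/* some expiry: shrink the delay */
		cur_delay /= 2;
		return cur_delay > min_delay ? cur_delay : min_delay;
	}

	cur_delay += cur_delay / 2;		/* mostly idle: grow the delay */
	return cur_delay < max_delay ? cur_delay : max_delay;
}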

Patch

diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
--- a/net/netfilter/nf_conntrack_core.c
+++ b/net/netfilter/nf_conntrack_core.c
@@ -983,7 +983,7 @@  static void gc_worker(struct work_struct *work)
                return;
 
        ratio = scanned ? expired_count * 100 / scanned : 0;
-       if (ratio >= 90)
+       if (ratio >= 90 || expired_count == GC_MAX_EVICTS)
                next_run = 0;