
[RFC,1/2] flow: virtualize get and entry deletion methods

Message ID 1269509091-6440-2-git-send-email-timo.teras@iki.fi
State RFC, archived
Delegated to: David Miller

Commit Message

Timo Teras March 25, 2010, 9:24 a.m. UTC
This allows the cached object to be validated before returning it.
It also allows the object to be destructed properly if the last
reference was held in the flow cache. This is also a preparation
for caching bundles in the flow cache.

In return for virtualizing the methods, we save on:
- not having to regenerate the whole flow cache on policy removal:
  each flow matching a killed policy gets refreshed as soon as the
  getter function notices the stale entry.
- not having to call flow_cache_flush from the policy gc, since the
  flow cache now properly destroys the object when it drops the
  last reference.

This also means that flow cache entry deletion does more work. If
it turns out to be too slow, we may have to implement delayed
deletion of flow cache entries. But this is a win, because it
enables immediate deletion of policies and bundles.
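
The cache entry methods are virtualized via a small ops structure
(from the include/net/flow.h hunk below):

struct flow_cache_entry_ops {
	struct flow_cache_entry_ops ** (*get)(struct flow_cache_entry_ops **);
	void (*delete)(struct flow_cache_entry_ops **);
};

A cached object embeds a pointer to its ops; the cache keeps a
pointer to that field, calls ->get() to validate the entry and take
a reference on a hit, and ->delete() to drop the cache's reference
when the entry is evicted or found stale.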

Signed-off-by: Timo Teras <timo.teras@iki.fi>
---
 include/net/flow.h     |   17 ++++++--
 include/net/xfrm.h     |    2 +
 net/core/flow.c        |  102 ++++++++++++++++++++++++----------------------
 net/xfrm/xfrm_policy.c |  105 +++++++++++++++++++++++++++++++-----------------
 4 files changed, 136 insertions(+), 90 deletions(-)

Comments

David Miller March 25, 2010, 7:26 p.m. UTC | #1
From: Timo Teras <timo.teras@iki.fi>
Date: Thu, 25 Mar 2010 11:24:50 +0200

> This allows the cached object to be validated before returning it.
> It also allows the object to be destructed properly if the last
> reference was held in the flow cache. This is also a preparation
> for caching bundles in the flow cache.
> 
> In return for virtualizing the methods, we save on:
> - not having to regenerate the whole flow cache on policy removal:
>   each flow matching a killed policy gets refreshed as soon as the
>   getter function notices the stale entry.
> - not having to call flow_cache_flush from the policy gc, since the
>   flow cache now properly destroys the object when it drops the
>   last reference.
> 
> This also means that flow cache entry deletion does more work. If
> it turns out to be too slow, we may have to implement delayed
> deletion of flow cache entries. But this is a win, because it
> enables immediate deletion of policies and bundles.
> 
> Signed-off-by: Timo Teras <timo.teras@iki.fi>

I'm concerned about the new costs being added here.

We now have to take the policy lock as a reader every time the flow
cache wants to grab a reference.  So we now have this plus the new
overhead of the indirect function calls.
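
For reference, the per-lookup cost in question is the new ->get()
callback for policies (condensed from the xfrm_policy.c hunk below),
which now runs on every flow cache hit:

static struct flow_cache_entry_ops **xfrm_policy_get_fce(
		struct flow_cache_entry_ops **ops)
{
	struct xfrm_policy *pol = container_of(ops, struct xfrm_policy, fc_ops);

	read_lock(&pol->lock);		/* reader lock on every hit */
	if (pol->walk.dead)
		ops = NULL;		/* stale: caller re-resolves */
	else
		xfrm_pol_hold(pol);
	read_unlock(&pol->lock);
	return ops;
}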

Maybe we can make the dead state check safe to do asynchronously
somehow?  I wonder if the policy layer is overdue for an RCU
conversion or similar.

Anyways, something to think about.  Otherwise I don't mind these
changes.

Timo Teras March 26, 2010, 6:17 a.m. UTC | #2
David Miller wrote:
> From: Timo Teras <timo.teras@iki.fi>
> Date: Thu, 25 Mar 2010 11:24:50 +0200
> 
>> This allows the cached object to be validated before returning it.
>> It also allows the object to be destructed properly if the last
>> reference was held in the flow cache. This is also a preparation
>> for caching bundles in the flow cache.
>>
>> In return for virtualizing the methods, we save on:
>> - not having to regenerate the whole flow cache on policy removal:
>>   each flow matching a killed policy gets refreshed as soon as the
>>   getter function notices the stale entry.
>> - not having to call flow_cache_flush from the policy gc, since the
>>   flow cache now properly destroys the object when it drops the
>>   last reference.
>>
>> This also means that flow cache entry deletion does more work. If
>> it turns out to be too slow, we may have to implement delayed
>> deletion of flow cache entries. But this is a win, because it
>> enables immediate deletion of policies and bundles.
>>
>> Signed-off-by: Timo Teras <timo.teras@iki.fi>
> 
> I'm concerned about the new costs being added here.
> 
> We have to now take the policy lock as a reader every time the flow
> cache wants to grab a reference.  So we now have this plus the
> indirect function call new overhead.

If we want to keep the flow cache generic, we pretty much need the
indirect calls. But considering that it might make sense to cache
bundles, or "xfrm cache entries", in all flow directions (so we can
track both the main and sub policies), we could make it specialized
instead.

> Maybe we can make the dead state check safe to do asynchronously
> somehow?  I wonder if the policy layer is overdue for an RCU
> conversion or similar.

I looked at the code and the policy lock is not needed much anymore.
I think it was most heavily used to protect ->bundles, which has
now been removed. But yes, I also previously said that ->walk.dead
should probably be converted to atomic_t. It is only written once,
when the policy is killed, so we can make it accessible without
the lock.

Considering that the whole cache was broken previously, and we
needed to take the policy write lock for each forwarded packet,
this does not sound that bad. Apparently locally originated traffic
going directly to an xfrm destination (not via gre) gets cached in
the socket dst cache and avoids the xfrm_lookup on the fast path
entirely(?).

We could get away from the per-cpu design with an RCU hash. But I
think we still need to track the hash entries similarly to this.
Though, there are probably some other tricks doable with RCU that
I'm not all that familiar with. I will take a quick look at the RCU
thing Herbert mentioned earlier.

> Anyways, something to think about.  Otherwise I don't mind these
> changes.

Ok, I'll add "convert walk.dead to atomic_t" so we can access it
without a lock.

I also noticed that the policy locking is not exactly right.
E.g. migration can touch templates, and we currently read templates
without locks. So I think bundle creation should be protected
with the policy read lock. But even this can probably be avoided
with RCU-style bundle creation: we take the bundle genid before
starting, create the bundle, and if the genid changed while we
were doing this, we retry.
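
A rough sketch of that retry idea (bundle_genid and create_bundle()
are placeholders for illustration, not existing fields/functions):

static struct dst_entry *bundle_get_or_create(struct xfrm_policy *pol,
					      struct flowi *fl, int family)
{
	struct dst_entry *dst;
	u32 genid;

	for (;;) {
		genid = atomic_read(&pol->bundle_genid);  /* assumed new field */
		dst = create_bundle(pol, fl, family);	  /* assumed helper */
		if (IS_ERR(dst))
			return dst;
		if (genid == atomic_read(&pol->bundle_genid))
			return dst;	/* templates unchanged, bundle is good */
		dst_release(dst);	/* policy changed under us, redo the work */
	}
}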

We might even get away from policy->lock altogether. In most
places it's only used to protect walk.dead, and bundle creation
can be synchronized as above. The only remaining place seems to
be the timer function. I think it's safe to remove the locking
there too, and synchronize using timer deletion. All this is
because any change to a policy results in xfrm_policy replacement:
the old one is killed and the new one is inserted atomically.

Do you think this would work?

- Timo
Herbert Xu March 29, 2010, 8:40 a.m. UTC | #3
On Thu, Mar 25, 2010 at 12:26:11PM -0700, David Miller wrote:
>
> Maybe we can make the dead state check safe to do asynchronously
> somehow?  I wonder if the policy layer is overdue for an RCU
> conversion or similar.

In fact, now that I read this again, we don't even need to grab
the lock to perform the deadness check.  This is because the
existing code never did it anyway.  The liveliness is guaranteed by
the policy destruction code doing a synchronous cache flush.

Timo, what was the reason for getting rid of the synchronous
flush again?

Thanks,
Timo Teras March 29, 2010, 9 a.m. UTC | #4
Herbert Xu wrote:
> On Thu, Mar 25, 2010 at 12:26:11PM -0700, David Miller wrote:
>> Maybe we can make the dead state check safe to do asynchronously
>> somehow?  I wonder if the policy layer is overdue for an RCU
>> conversion or similar.
> 
> In fact, now that I read this again, we don't even need to grab
> the lock to perform the deadness check.  This is because the
> existing never did it anyway.  The liveliness is guaranteed by
> the policy destruction code doing a synchronous cache flush.
> 
> Timo, what was the reason for getting rid of the synchronous
> flush again?

No, just having the flush call is not enough to guarantee
liveliness. The flushing happens in delayed work, and the flows
might be in use before the flush has been finished or even
started.

I was also hoping to move the "delayed" deletion part to
flow cache core so the code would be shared with policies and
bundles.

As stated before, we don't really need the lock for the 'dead'
check. It's only written once, and it is actually read without the
lock in some other places too. And all the other places that do
take the lock release it right away, making the resulting dead
check "old" anyway.

It looks to me that the whole policy locking is not up-to-date
anymore. Apparently the only place that actually needs it is
the migration code (which happens to be broken anyway, since
bundle creation does not take the read lock). But it could
relatively easily be changed to replace the policy with new
templates... and the whole policy stuff converted to RCU.

However, now that I have the code almost ready, I'm wondering
whether this is such a good idea or not. I was hoping to have
bundles always in the flow cache, but that is not sufficient: in
case we have a socket-bound policy that results in a bundle, we
might need to create bundles outside the flow cache.

It turns out that a generic flow cache which could host both
bundles and policies might not be so easy to do. Or at least
there would not be as much shared code as hoped for. Given that
it also makes sense to store other objects for the output path
(we might need references to multiple policies when matching
both a sub and a main policy), it might be worth considering
making the flow cache specialized to contain those objects. Or
do we have other possible users for the flow cache?

- Timo
Herbert Xu March 29, 2010, 9:09 a.m. UTC | #5
On Mon, Mar 29, 2010 at 12:00:38PM +0300, Timo Teräs wrote:
>
>> Timo, what was the reason for getting rid of the synchronous
>> flush again?
>
> No, just having the flush call is not enough to guarantee
> liveliness. The flushing happens in delayed work, and the flows
> might be in use before the flush has been finished or even
> started.
>
> I was also hoping to move the "delayed" deletion part to
> flow cache core so the code would be shared with policies and
> bundles.
>
> As stated before, we don't really need lock for the 'dead' check.
> It's only written once, and actually read without lock in some
> other places too. And all the other places that do take the lock,
> release it right away making the resulting dead check result
> "old" anyway. 

No that's not the point.

The lock is not there to protect reading ->dead, which is atomic
anyway.

It's there to guarantee that you don't increment the ref count
on a dead policy.

For the flow cache we didn't need this because the policy code
would flush the cache synchronously so we can always grab a ref
count safely as long as BH is still off.

So if you leave the flow cache flushing as is, then we should
still be able to do it without holding the lock, or checking
for deadness.
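
I.e. with the synchronous flush kept, the policy ->get() could
presumably shrink to something like this (a sketch against the
fc_ops layout in your patch, untested):

static struct flow_cache_entry_ops **xfrm_policy_get_fce(
		struct flow_cache_entry_ops **ops)
{
	/* BH is off and policy destruction flushes the cache
	 * synchronously, so the cached reference is still valid. */
	xfrm_pol_hold(container_of(ops, struct xfrm_policy, fc_ops));
	return ops;
}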

> Looks to me that the whole policy locking is not up-to-date
> anymore. Apparently the only place that actually needs it is
> the migration code (which just happens to be broke anyway since
> bundle creation does not take read lock). But it could be
> relatively easily changed to replace policy with new
> templates... and the whole policy stuff converted to RCU.

I wouldn't be surprised if the migration code is buggy.  But
that's orthogonal to your patch.

Cheers,
Timo Teras March 29, 2010, 10:07 a.m. UTC | #6
Herbert Xu wrote:
> On Mon, Mar 29, 2010 at 12:00:38PM +0300, Timo Teräs wrote:
>>> Timo, what was the reason for getting rid of the synchronous
>>> flush again?
>> No, just having the flush call is not enough to guarantee
>> liveliness. The flushing happens in delayed work, and the flows
>> might be in use before the flush has been finished or even
>> started.
>>
>> I was also hoping to move the "delayed" deletion part to
>> flow cache core so the code would be shared with policies and
>> bundles.
>>
>> As stated before, we don't really need lock for the 'dead' check.
>> It's only written once, and actually read without lock in some
>> other places too. And all the other places that do take the lock,
>> release it right away making the resulting dead check result
>> "old" anyway. 
> 
> No that's not the point.
> 
> The lock is not there to protect reading ->dead, which is atomic
> anyway.

No, it's pretty much for reading ->dead as far as I can tell.
That's how ->dead is accessed in multiple other places too, and
that's the only reason I added the locking in my new code.

But yes, it's pointless. ->dead access is atomic, except where
it's read and written in xfrm_policy_kill. It's trivially
changeable to atomic_t and I have a patch for this.

> It's there to guarantee that you don't increment the ref count
> on a dead policy.

The previous code did not do any locking before taking a
reference. The lock is not needed for that.

> For the flow cache we didn't need this because the policy code
> would flush the cache synchronously so we can always grab a ref
> count safely as long as BH is still off.

The old code could return a policy object that had been killed.
It relied solely on the fact that the policy gc will flush the
flow cache. Between 'policy killed' and 'policy gc ran', the old
code would return a policy object that is marked dead. The change
is an improvement in this regard, as the flow cache objects get
refreshed immediately after a policy is marked dead.

The policy GC called flush for two reasons:
- purging the stale entries
- making sure that the refcount of the policy won't go to zero
  when the flow cache's reference is released (because the flow
  cache only did atomic_dec but did not call the destructor)

Both issues are handled differently in the patch: stale entries
are refreshed immediately, and the virtual destructor is called
when the object gets deleted. Thus the slow flushing is not
needed as often now.

> So if you leave the flow cache flushing as is, then we should
> still be able to do the it without holding the lock, or checking
> for deadness.

We can still drop the locking, as ->dead can be made atomic_t.

Checking ->dead improves the cache's speed of reacting to policy
object changes. And the virtual ->get is especially needed for
bundles, as they can get stale a lot more often.

>> Looks to me that the whole policy locking is not up-to-date
>> anymore. Apparently the only place that actually needs it is
>> the migration code (which just happens to be broke anyway since
>> bundle creation does not take read lock). But it could be
>> relatively easily changed to replace policy with new
>> templates... and the whole policy stuff converted to RCU.
> 
> I wouldn't be surprised that the migration code is buggy.  But
> that's orthogonal to your patch.

Yeah. Just a note that the road map to RCU policies is trivial
and fixes some races in locking currently.

- Timo

Herbert Xu March 29, 2010, 10:26 a.m. UTC | #7
On Mon, Mar 29, 2010 at 01:07:50PM +0300, Timo Teräs wrote:
>
>> For the flow cache we didn't need this because the policy code
>> would flush the cache synchronously so we can always grab a ref
>> count safely as long as BH is still off.
>
> The old code could return policy object that was killed. It

Which is fine.  I'd rather have that than this new behaviour
which adds a lock.  We don't delete policies all the time, so
optimising for that case and pessimising the normal case is wrong!

> The reason for policy GC calling flush was there for two
> reasons:
> - purging the stale entries
> - making sure that refcount of policy won't go to zero after
>  releasing flow cache's reference (because the flow cache
>  did only atomic_dec but did not call destructor)
>
> Both issues are handled otherwise in the patch. By refreshing
> stale entries immediately. Or calling virtual destructor when
> the object gets deleted. Thus the slow flushing is not needed
> as often now.

Let's step back one second.  It's best to not accumulate unrelated
changes in one patch.  So is there a reason why you must remove
the synchronous flow cache flushing from the policy destruction
path? If not please move that into a different patch.

> We can still drop the locking, as ->dead can be made atomic_t.

No it doesn't need to be atomic, reading an int is always atomic.

> Checking ->dead improves cache's speed to react to policy object
> changes. And the virtual ->get is especially needed for bundles
> as they can get stale a lot more often.

I really see no point to optimising for such an unlikely case
but as long as you kill the locks I guess I'm not too bothered.

> Yeah. Just a note that the road map to RCU policies is trivial
> and fixes some races in locking currently.

Please do one change at a time.  Let's just focus on the original
issue of the bundle linked list for now.

Thanks,
Timo Teras March 29, 2010, 10:36 a.m. UTC | #8
Herbert Xu wrote:
> On Mon, Mar 29, 2010 at 01:07:50PM +0300, Timo Teräs wrote:
>>> For the flow cache we didn't need this because the policy code
>>> would flush the cache synchronously so we can always grab a ref
>>> count safely as long as BH is still off.
>> The old code could return policy object that was killed. It
> 
> Which is fine.  I'd rather have that than this new behaviour
> which adds a lock.  We don't delete policies all the time, so
> optimising for that case and pessimising the normal case is wrong!
> 
>> The reason for policy GC calling flush was there for two
>> reasons:
>> - purging the stale entries
>> - making sure that refcount of policy won't go to zero after
>>  releasing flow cache's reference (because the flow cache
>>  did only atomic_dec but did not call destructor)
>>
>> Both issues are handled otherwise in the patch. By refreshing
>> stale entries immediately. Or calling virtual destructor when
>> the object gets deleted. Thus the slow flushing is not needed
>> as often now.
> 
> Let's step back one second.  It's best to not accumulate unrelated
> changes in one patch.  So is there a reason why you must remove
> the synchronous flow cache flushing from the policy destruction
> path? If not please move that into a different patch.

Yes, I'm splitting up the patch into more fine-grained pieces.

The synchronous flow cache flushing does not have to be removed,
but I consider removing it an improvement.

>> We can still drop the locking, as ->dead can be made atomic_t.
> 
> No it doesn't need to be atomic, reading an int is always atomic.

The only reason why it needs to be atomic is xfrm_policy_kill(),
which writes '1' and checks whether it was zero previously. Since
the idea is to get rid of the policy lock, we can turn the ->dead
flag into an atomic_t and use atomic_xchg for that. Otherwise it
would be ok to keep it as just a regular int.
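
A sketch of that variant at the top of xfrm_policy_kill() (assuming
walk.dead were changed to atomic_t, which it is not today):

	/* claim the kill exactly once, without taking policy->lock */
	if (atomic_xchg(&policy->walk.dead, 1)) {
		WARN_ON(1);	/* already killed by someone else */
		return;
	}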

>> Checking ->dead improves cache's speed to react to policy object
>> changes. And the virtual ->get is especially needed for bundles
>> as they can get stale a lot more often.
> 
> I really see no point to optimising for such an unlikely case
> but as long as you kill the locks I guess I'm not too bothered.

Agreed. But as the lockless check is cheap, why not have it.
And some systems do get policy changes quite a bit: an IKE daemon
is sometimes configured to create policies on demand, so this does
have a real use case.

>> Yeah. Just a note that the road map to RCU policies is trivial
>> and fixes some races in locking currently.
> 
> Please do one change at a time.  Let's just focus on the original
> issue of the bundle linked list for now.

Yes. The road there is just a bit long. I've already added some
other stuff and split up the existing patch. It's currently seven
patches. I'm still tracking down one reference leak, but I'll
send in the new set soonish.

- Timo

Herbert Xu March 29, 2010, 11:10 a.m. UTC | #9
On Mon, Mar 29, 2010 at 01:36:40PM +0300, Timo Teräs wrote:
>
>>> We can still drop the locking, as ->dead can be made atomic_t.
>>
>> No it doesn't need to be atomic, reading an int is always atomic.
>
> The only reason why it needs to be atomic is because of
> xfrm_policy_kill() which writes '1' and checks if it was zero
> previously. Since the idea is to get rid of the policy lock, we
> can turn ->dead flag to atomic_t and use atomic_xchg for that.
> Otherwise it would be ok to have it as just regular int.

I don't see the point.  As long as the data paths do not take
the lock changing this doesn't buy us much.  You're still making
that cacheline exclusive.

Thanks,
Timo Teras March 29, 2010, 11:23 a.m. UTC | #10
Herbert Xu wrote:
> On Mon, Mar 29, 2010 at 01:36:40PM +0300, Timo Teräs wrote:
>>>> We can still drop the locking, as ->dead can be made atomic_t.
>>> No it doesn't need to be atomic, reading an int is always atomic.
>> The only reason why it needs to be atomic is because of
>> xfrm_policy_kill() which writes '1' and checks if it was zero
>> previously. Since the idea is to get rid of the policy lock, we
>> can turn ->dead flag to atomic_t and use atomic_xchg for that.
>> Otherwise it would be ok to have it as just regular int.
> 
> I don't see the point.  As long as the data paths do not take
> the lock changing this doesn't buy us much.  You're still making
> that cacheline exclusive.

To my understanding, declaring an atomic_t, or reading it with
atomic_read, does not make the cache line exclusive. Only the
atomic_* ops that write to it take the cache line. And since this
is done exactly once per policy (otherwise it's a bug/warn case),
it does not impose a significant performance issue.

But looking at the code more, the check should not be needed:
xfrm_policy_kill() is only called when the entry is removed from
the hash list, which can happen only once.

Do you think we can just change it to unconditionally write
"policy->walk.dead = 1;" and be done with that?

Alternatively, we can move the ->dead check to be done while
holding the hash lock, to guarantee no one else is writing
simultaneously.
Herbert Xu March 29, 2010, 11:32 a.m. UTC | #11
On Mon, Mar 29, 2010 at 02:23:02PM +0300, Timo Teräs wrote:
>
>> I don't see the point.  As long as the data paths do not take
>> the lock changing this doesn't buy us much.  You're still making
>> that cacheline exclusive.
>
> To my understanding declaring an atomic_t, or reading it with
> atomic_read does not make cache line exclusive. Only the atomic_*
> writing to it take the cache line. And since this is done exactly
> once for policy (or it's a bug/warn thingy) it does not impose
> significant performance issue.

I was talking about the lock vs. atomic_xchg in xfrm_policy_kill.
There is practically no difference for that case.

Yes, on the read side the lock is a completely different beast
compared to atomic_read, but I don't see how you can safely
replace

	lock
	if (!dead)
		take ref
	unlock

without making other changes.

> But looking at the code more. The check should not be needed.
> xfrm_policy_kill() is only called if the entry is removed from
> the hash list, which can happen only once.
>
> Do you think we can just change it to unconditionally writing
> to "policy->walk.dead = 1;" and be done with that?
>
> Alternatively, we can move the ->dead check to be done while
> holding the hash lock to guarantee no one else is writing
> simultaneously.

See above.

Cheers,
Timo Teras March 29, 2010, 11:39 a.m. UTC | #12
Herbert Xu wrote:
> On Mon, Mar 29, 2010 at 02:23:02PM +0300, Timo Teräs wrote:
>>> I don't see the point.  As long as the data paths do not take
>>> the lock changing this doesn't buy us much.  You're still making
>>> that cacheline exclusive.
>> To my understanding declaring an atomic_t, or reading it with
>> atomic_read does not make cache line exclusive. Only the atomic_*
>> writing to it take the cache line. And since this is done exactly
>> once for policy (or it's a bug/warn thingy) it does not impose
>> significant performance issue.
> 
> I was talking about the lock vs. atomic_xchg in xfrm_policy_kill.
> There is practically no difference for that case.
> 
> Yes, on the read side the lock is a completely different beast
> compared to atomic_read, but I don't see how you can safely
> replace
> 
> 	lock
> 	if (!dead)
> 		take ref
> 	unlock
> 
> without making other changes.

Because the lock is not needed to take a ref.

You can take a ref as long as someone else is also holding a
valid reference.

The flow cache keeps a reference to each object it is holding,
thus we can always take a reference if we find an object there.
This is what the current code does, and it can still be done
in the new code.

The only reason my patch had the lock was the dead check.
Since that can be done without locks, the locking can simply
be removed.

The dead check is still an improvement: we find outdated cache
entries without doing a full flush, and we find them faster
than a full flush would.

The old code would use policy entries with the dead flag set,
and happily return and use them until the policy gc had an
opportunity to run.
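
Putting those together, the policy ->get() would then be just a
lockless hint check plus a reference grab (sketch, not the final
patch):

static struct flow_cache_entry_ops **xfrm_policy_get_fce(
		struct flow_cache_entry_ops **ops)
{
	struct xfrm_policy *pol = container_of(ops, struct xfrm_policy, fc_ops);

	if (pol->walk.dead)
		return NULL;	/* stale entry: have the caller re-resolve */
	xfrm_pol_hold(pol);	/* safe: the cache itself still holds a ref */
	return ops;
}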
Herbert Xu March 29, 2010, 11:57 a.m. UTC | #13
On Mon, Mar 29, 2010 at 02:39:36PM +0300, Timo Teräs wrote:
>
>> Yes, on the read side the lock is a completely different beast
>> compared to atomic_read, but I don't see how you can safely
>> replace
>>
>> 	lock
>> 	if (!dead)
>> 		take ref
>> 	unlock
>>
>> without making other changes.
>
> Because the lock is not needed to take ref.
>
> You can take ref as long as someone else is also holding a
> valid reference.
>
> The flow cache keeps a reference to each object it is holding,
> thus we can always take a reference if we find an object there.
> This is what is being done in the current code, and can be
> still done in the new code.

I'm not talking about the flow cache.  The current flow cache
code doesn't even take the lock.

I'm talking about the other places that you have to convert in
order to make this into an atomic_t.

Cheers,
Timo Teras March 29, 2010, 12:03 p.m. UTC | #14
Herbert Xu wrote:
> I'm not talking about the flow cache.  The current flow cache
> code doesn't even take the lock.
>
> I'm talking about the other places that you have to convert in
> order to make this into an atomic_t.

Did you check the other places?

All other places do:
  for each policy x:
    lock(x)
    pol_dead |= x->walk.dead;
    unlock(x)
  if pol_dead
    abort

or similar.

And some cases don't even bother to lock the policy currently
when reading walk.dead.

All of the code treats the walk.dead as a hint. It does not need
strong synchronization with a lock.
Herbert Xu March 29, 2010, 12:11 p.m. UTC | #15
On Mon, Mar 29, 2010 at 03:03:26PM +0300, Timo Teräs wrote:
> Herbert Xu wrote:
>> I'm not talking about the flow cache.  The current flow cache
>> code doesn't even take the lock.
>>
>> I'm talking about the other places that you have to convert in
>> order to make this into an atomic_t.
>
> Did you check the other places?
>
> All other places do:
>  fox x policies:
>    lock(x)
>    pol_dead |= x->walk.dead;
>    unlock(x)
>  if pol_dead
>    abort
>
> or similar.
>
> And some cases don't even bother to lock the policy currently
> when reading walk.dead.
>
> All of the code treats the walk.dead as a hint. It does not need
> strong synchronization with a lock.

Well then converting it to an atomic_t is completely pointless.
Timo Teras March 29, 2010, 12:20 p.m. UTC | #16
Herbert Xu wrote:
> On Mon, Mar 29, 2010 at 03:03:26PM +0300, Timo Teräs wrote:
>> All of the code treats the walk.dead as a hint. It does not need
>> strong synchronization with a lock.
> 
> Well then converting it to an atomic_t is completely pointless.

Yes, I came to the same conclusion. The only thing I thought
justified it was xfrm_policy_kill() checking the old value.

But as noted a few mails ago, it's not necessary. So I'll just go
ahead and remove all locking from the read side, and change
xfrm_policy_kill to use a plain write.
Herbert Xu March 29, 2010, 12:25 p.m. UTC | #17
On Mon, Mar 29, 2010 at 03:20:13PM +0300, Timo Teräs wrote:
>
> But as noted few mails ago, it's not necessary. So I'll just go
> ahead and remove all locking from the read side, and move the
> xfrm_policy_kill to use plain write.

No you can't make it a plain write in xfrm_policy_kill.  The same
policy may be killed simultaneously, by the timer and user action.

Cheers,
Timo Teras March 29, 2010, 12:33 p.m. UTC | #18
Herbert Xu wrote:
> On Mon, Mar 29, 2010 at 03:20:13PM +0300, Timo Teräs wrote:
>> But as noted few mails ago, it's not necessary. So I'll just go
>> ahead and remove all locking from the read side, and move the
>> xfrm_policy_kill to use plain write.
> 
> No you can't make it a plain write in xfrm_policy_kill.  The same
> policy may be killed simultaneously, by the timer and user action.

So we fix up all the callers of xfrm_policy_kill() to properly
check the result of __xfrm_policy_unlink(). Since the policy can
be deleted from the hashes only once (this is protected by
xfrm_policy_lock), the return value of __xfrm_policy_unlink() can
be used to hand the responsibility of calling xfrm_policy_kill()
to exactly one caller.

I thought this was already being done, but apparently it's not
the case.
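
The callers would then follow the same pattern xfrm_policy_delete()
already uses in the patch, roughly:

	write_lock_bh(&xfrm_policy_lock);
	pol = __xfrm_policy_unlink(pol, dir);
	write_unlock_bh(&xfrm_policy_lock);
	if (pol) {
		/* we unlinked it, so only we call the kill */
		xfrm_policy_kill(pol);
		return 0;
	}
	return -ENOENT;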
Herbert Xu March 29, 2010, 12:45 p.m. UTC | #19
On Mon, Mar 29, 2010 at 03:33:57PM +0300, Timo Teräs wrote:
>
> So we fix up all the callers of xfrm_policy_kill to check properly
> result of __xfrm_policy_unlink(). Since the policy can be only
> once deleted from the hashes (it's protected by xfrm_policy_lock)
> return value of __xfrm_policy_unlink() can be used to give
> responsibility of calling xfrm_policy_kill() exactly once.
>
> I thought this was already being done, but apparently it's not
> the case.

Actually you're right.  This should be the case as otherwise
we'd be triggering that WARN_ON.

Since it hasn't triggered in the five years that it's been around,
I suppose we can now remove it along with the lock.

Thanks,

Patch

diff --git a/include/net/flow.h b/include/net/flow.h
index 809970b..68fea54 100644
--- a/include/net/flow.h
+++ b/include/net/flow.h
@@ -86,11 +86,20 @@  struct flowi {
 
 struct net;
 struct sock;
-typedef int (*flow_resolve_t)(struct net *net, struct flowi *key, u16 family,
-			      u8 dir, void **objp, atomic_t **obj_refp);
 
-extern void *flow_cache_lookup(struct net *net, struct flowi *key, u16 family,
-			       u8 dir, flow_resolve_t resolver);
+struct flow_cache_entry_ops {
+	struct flow_cache_entry_ops ** (*get)(struct flow_cache_entry_ops **);
+	void (*delete)(struct flow_cache_entry_ops **);
+};
+
+typedef struct flow_cache_entry_ops **(*flow_resolve_t)(
+		struct net *net, struct flowi *key, u16 family,
+		u8 dir, struct flow_cache_entry_ops **old_ops);
+
+extern struct flow_cache_entry_ops **flow_cache_lookup(
+		struct net *net, struct flowi *key, u16 family,
+		u8 dir, flow_resolve_t resolver);
+
 extern void flow_cache_flush(void);
 extern atomic_t flow_cache_genid;
 
diff --git a/include/net/xfrm.h b/include/net/xfrm.h
index d74e080..cb8934b 100644
--- a/include/net/xfrm.h
+++ b/include/net/xfrm.h
@@ -19,6 +19,7 @@ 
 #include <net/route.h>
 #include <net/ipv6.h>
 #include <net/ip6_fib.h>
+#include <net/flow.h>
 
 #include <linux/interrupt.h>
 
@@ -481,6 +482,7 @@  struct xfrm_policy {
 	atomic_t		refcnt;
 	struct timer_list	timer;
 
+	struct flow_cache_entry_ops *fc_ops;
 	u32			priority;
 	u32			index;
 	struct xfrm_mark	mark;
diff --git a/net/core/flow.c b/net/core/flow.c
index 9601587..dfbf3c9 100644
--- a/net/core/flow.c
+++ b/net/core/flow.c
@@ -26,13 +26,12 @@ 
 #include <linux/security.h>
 
 struct flow_cache_entry {
-	struct flow_cache_entry	*next;
-	u16			family;
-	u8			dir;
-	u32			genid;
-	struct flowi		key;
-	void			*object;
-	atomic_t		*object_ref;
+	struct flow_cache_entry		*next;
+	u16				family;
+	u8				dir;
+	u32				genid;
+	struct flowi			key;
+	struct flow_cache_entry_ops	**ops;
 };
 
 atomic_t flow_cache_genid = ATOMIC_INIT(0);
@@ -86,8 +85,8 @@  static void flow_cache_new_hashrnd(unsigned long arg)
 
 static void flow_entry_kill(int cpu, struct flow_cache_entry *fle)
 {
-	if (fle->object)
-		atomic_dec(fle->object_ref);
+	if (fle->ops)
+		(*fle->ops)->delete(fle->ops);
 	kmem_cache_free(flow_cachep, fle);
 	flow_count(cpu)--;
 }
@@ -165,10 +164,12 @@  static int flow_key_compare(struct flowi *key1, struct flowi *key2)
 	return 0;
 }
 
-void *flow_cache_lookup(struct net *net, struct flowi *key, u16 family, u8 dir,
-			flow_resolve_t resolver)
+struct flow_cache_entry_ops **flow_cache_lookup(
+		struct net *net, struct flowi *key, u16 family, u8 dir,
+		flow_resolve_t resolver)
 {
 	struct flow_cache_entry *fle, **head;
+	struct flow_cache_entry_ops **ops;
 	unsigned int hash;
 	int cpu;
 
@@ -176,6 +177,8 @@  void *flow_cache_lookup(struct net *net, struct flowi *key, u16 family, u8 dir,
 	cpu = smp_processor_id();
 
 	fle = NULL;
+	ops = NULL;
+
 	/* Packet really early in init?  Making flow_cache_init a
 	 * pre-smp initcall would solve this.  --RR */
 	if (!flow_table(cpu))
@@ -187,26 +190,35 @@  void *flow_cache_lookup(struct net *net, struct flowi *key, u16 family, u8 dir,
 
 	head = &flow_table(cpu)[hash];
 	for (fle = *head; fle; fle = fle->next) {
-		if (fle->family == family &&
-		    fle->dir == dir &&
-		    flow_key_compare(key, &fle->key) == 0) {
-			if (fle->genid == atomic_read(&flow_cache_genid)) {
-				void *ret = fle->object;
-
-				if (ret)
-					atomic_inc(fle->object_ref);
-				local_bh_enable();
-
-				return ret;
+		if (fle->family != family ||
+		    fle->dir != dir ||
+		    flow_key_compare(key, &fle->key) != 0)
+			continue;
+
+		ops = fle->ops;
+		if (fle->genid == atomic_read(&flow_cache_genid)) {
+			if (ops) {
+				ops = (*ops)->get(ops);
+				if (ops) {
+					local_bh_enable();
+					return ops;
+				}
+				ops = fle->ops;
 			}
-			break;
+		} else {
+			if (ops)
+				(*ops)->delete(ops);
+			fle->ops = NULL;
+			ops = NULL;
 		}
+		break;
 	}
 
 	if (!fle) {
 		if (flow_count(cpu) > flow_hwm)
 			flow_cache_shrink(cpu);
 
+		ops = NULL;
 		fle = kmem_cache_alloc(flow_cachep, GFP_ATOMIC);
 		if (fle) {
 			fle->next = *head;
@@ -214,36 +226,28 @@  void *flow_cache_lookup(struct net *net, struct flowi *key, u16 family, u8 dir,
 			fle->family = family;
 			fle->dir = dir;
 			memcpy(&fle->key, key, sizeof(*key));
-			fle->object = NULL;
+			fle->ops = NULL;
 			flow_count(cpu)++;
 		}
 	}
 
 nocache:
-	{
-		int err;
-		void *obj;
-		atomic_t *obj_ref;
-
-		err = resolver(net, key, family, dir, &obj, &obj_ref);
-
-		if (fle && !err) {
-			fle->genid = atomic_read(&flow_cache_genid);
-
-			if (fle->object)
-				atomic_dec(fle->object_ref);
-
-			fle->object = obj;
-			fle->object_ref = obj_ref;
-			if (obj)
-				atomic_inc(fle->object_ref);
+	ops = resolver(net, key, family, dir, ops);
+	if (fle) {
+		fle->genid = atomic_read(&flow_cache_genid);
+		if (IS_ERR(ops)) {
+			fle->genid--;
+			fle->ops = NULL;
+		} else {
+			fle->ops = ops;
 		}
-		local_bh_enable();
-
-		if (err)
-			obj = ERR_PTR(err);
-		return obj;
+	} else {
+		if (ops && !IS_ERR(ops))
+			(*ops)->delete(ops);
 	}
+	local_bh_enable();
+
+	return ops;
 }
 
 static void flow_cache_flush_tasklet(unsigned long data)
@@ -260,11 +264,11 @@  static void flow_cache_flush_tasklet(unsigned long data)
 		for (; fle; fle = fle->next) {
 			unsigned genid = atomic_read(&flow_cache_genid);
 
-			if (!fle->object || fle->genid == genid)
+			if (!fle->ops || fle->genid == genid)
 				continue;
 
-			fle->object = NULL;
-			atomic_dec(fle->object_ref);
+			(*fle->ops)->delete(fle->ops);
+			fle->ops = NULL;
 		}
 	}
 
diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
index 843e066..a0fa804 100644
--- a/net/xfrm/xfrm_policy.c
+++ b/net/xfrm/xfrm_policy.c
@@ -216,6 +216,30 @@  expired:
 	xfrm_pol_put(xp);
 }
 
+static struct flow_cache_entry_ops **xfrm_policy_get_fce(
+		struct flow_cache_entry_ops **ops)
+{
+	struct xfrm_policy *pol = container_of(ops, struct xfrm_policy, fc_ops);
+
+	read_lock(&pol->lock);
+	if (pol->walk.dead)
+		ops = NULL;
+	else
+		xfrm_pol_hold(pol);
+	read_unlock(&pol->lock);
+
+	return ops;
+}
+
+static void xfrm_policy_delete_fce(struct flow_cache_entry_ops **ops)
+{
+	xfrm_pol_put(container_of(ops, struct xfrm_policy, fc_ops));
+}
+
+static struct flow_cache_entry_ops xfrm_policy_fc_ops __read_mostly = {
+	.get = xfrm_policy_get_fce,
+	.delete = xfrm_policy_delete_fce,
+};
 
 /* Allocate xfrm_policy. Not used here, it is supposed to be used by pfkeyv2
  * SPD calls.
@@ -236,6 +260,7 @@  struct xfrm_policy *xfrm_policy_alloc(struct net *net, gfp_t gfp)
 		atomic_set(&policy->refcnt, 1);
 		setup_timer(&policy->timer, xfrm_policy_timer,
 				(unsigned long)policy);
+		policy->fc_ops = &xfrm_policy_fc_ops;
 	}
 	return policy;
 }
@@ -269,9 +294,6 @@  static void xfrm_policy_gc_kill(struct xfrm_policy *policy)
 	if (del_timer(&policy->timer))
 		atomic_dec(&policy->refcnt);
 
-	if (atomic_read(&policy->refcnt) > 1)
-		flow_cache_flush();
-
 	xfrm_pol_put(policy);
 }
 
@@ -671,10 +693,8 @@  struct xfrm_policy *xfrm_policy_bysel_ctx(struct net *net, u32 mark, u8 type,
 	}
 	write_unlock_bh(&xfrm_policy_lock);
 
-	if (ret && delete) {
-		atomic_inc(&flow_cache_genid);
+	if (ret && delete)
 		xfrm_policy_kill(ret);
-	}
 	return ret;
 }
 EXPORT_SYMBOL(xfrm_policy_bysel_ctx);
@@ -713,10 +733,8 @@  struct xfrm_policy *xfrm_policy_byid(struct net *net, u32 mark, u8 type,
 	}
 	write_unlock_bh(&xfrm_policy_lock);
 
-	if (ret && delete) {
-		atomic_inc(&flow_cache_genid);
+	if (ret && delete)
 		xfrm_policy_kill(ret);
-	}
 	return ret;
 }
 EXPORT_SYMBOL(xfrm_policy_byid);
@@ -835,7 +853,6 @@  int xfrm_policy_flush(struct net *net, u8 type, struct xfrm_audit *audit_info)
 	}
 	if (!cnt)
 		err = -ESRCH;
-	atomic_inc(&flow_cache_genid);
 out:
 	write_unlock_bh(&xfrm_policy_lock);
 	return err;
@@ -989,32 +1006,35 @@  fail:
 	return ret;
 }
 
-static int xfrm_policy_lookup(struct net *net, struct flowi *fl, u16 family,
-			      u8 dir, void **objp, atomic_t **obj_refp)
+static struct flow_cache_entry_ops **xfrm_policy_lookup(
+		struct net *net, struct flowi *fl, u16 family,
+		u8 dir, struct flow_cache_entry_ops **old_ops)
 {
 	struct xfrm_policy *pol;
-	int err = 0;
+
+	if (old_ops)
+		xfrm_pol_put(container_of(old_ops, struct xfrm_policy, fc_ops));
 
 #ifdef CONFIG_XFRM_SUB_POLICY
 	pol = xfrm_policy_lookup_bytype(net, XFRM_POLICY_TYPE_SUB, fl, family, dir);
-	if (IS_ERR(pol)) {
-		err = PTR_ERR(pol);
-		pol = NULL;
-	}
-	if (pol || err)
-		goto end;
+	if (IS_ERR(pol))
+		return (void *) pol;
+	if (pol)
+		goto found;
 #endif
 	pol = xfrm_policy_lookup_bytype(net, XFRM_POLICY_TYPE_MAIN, fl, family, dir);
-	if (IS_ERR(pol)) {
-		err = PTR_ERR(pol);
-		pol = NULL;
-	}
-#ifdef CONFIG_XFRM_SUB_POLICY
-end:
-#endif
-	if ((*objp = (void *) pol) != NULL)
-		*obj_refp = &pol->refcnt;
-	return err;
+	if (IS_ERR(pol))
+		return (void *) pol;
+	if (pol)
+		goto found;
+	return NULL;
+
+found:
+	/* Resolver returns two references:
+	 * one for cache and one for caller of flow_cache_lookup() */
+	xfrm_pol_hold(pol);
+
+	return &pol->fc_ops;
 }
 
 static inline int policy_to_flow_dir(int dir)
@@ -1104,8 +1124,6 @@  int xfrm_policy_delete(struct xfrm_policy *pol, int dir)
 	pol = __xfrm_policy_unlink(pol, dir);
 	write_unlock_bh(&xfrm_policy_lock);
 	if (pol) {
-		if (dir < XFRM_POLICY_MAX)
-			atomic_inc(&flow_cache_genid);
 		xfrm_policy_kill(pol);
 		return 0;
 	}
@@ -1588,18 +1606,24 @@  restart:
 	}
 
 	if (!policy) {
+		struct flow_cache_entry_ops **ops;
+
 		/* To accelerate a bit...  */
 		if ((dst_orig->flags & DST_NOXFRM) ||
 		    !net->xfrm.policy_count[XFRM_POLICY_OUT])
 			goto nopol;
 
-		policy = flow_cache_lookup(net, fl, dst_orig->ops->family,
-					   dir, xfrm_policy_lookup);
-		err = PTR_ERR(policy);
-		if (IS_ERR(policy)) {
+		ops = flow_cache_lookup(net, fl, dst_orig->ops->family,
+					dir, xfrm_policy_lookup);
+		err = PTR_ERR(ops);
+		if (IS_ERR(ops)) {
 			XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTPOLERROR);
 			goto dropdst;
 		}
+		if (ops)
+			policy = container_of(ops, struct xfrm_policy, fc_ops);
+		else
+			policy = NULL;
 	}
 
 	if (!policy)
@@ -1952,9 +1976,16 @@  int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
 		}
 	}
 
-	if (!pol)
-		pol = flow_cache_lookup(net, &fl, family, fl_dir,
+	if (!pol) {
+		struct flow_cache_entry_ops **ops;
+
+		ops = flow_cache_lookup(net, &fl, family, fl_dir,
 					xfrm_policy_lookup);
+		if (IS_ERR(ops))
+			pol = (void *) ops;
+		else if (ops)
+			pol = container_of(ops, struct xfrm_policy, fc_ops);
+	}
 
 	if (IS_ERR(pol)) {
 		XFRM_INC_STATS(net, LINUX_MIB_XFRMINPOLERROR);