
net/mlx4_en: protect ring->xdp_prog with rcu_read_lock

Message ID 20160826203808.23664-1-bblanco@plumgrid.com
State Changes Requested, archived
Delegated to: David Miller

Commit Message

Brenden Blanco Aug. 26, 2016, 8:38 p.m. UTC
Depending on the preempt mode, the bpf_prog stored in xdp_prog may be
freed despite the use of call_rcu inside bpf_prog_put. The situation is
possible when running in PREEMPT_RCU=y mode, for instance, since the rcu
callback for destroying the bpf prog can run even during the bh handling
in the mlx4 rx path.

Several options were considered before this patch was settled on:

Add a napi_synchronize loop in mlx4_xdp_set, which would occur after all
of the rings are updated with the new program.
This approach has the disadvantage that as the number of rings
increases, the speed of update will slow down significantly due to
napi_synchronize's msleep(1).
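
For illustration, a rough sketch of that rejected approach inside
mlx4_xdp_set (hypothetical code; the rx_ring/rx_cq names and the
refcount handling are approximations, not actual v1 changes):

	struct bpf_prog *old_prog = NULL;
	int i;

	/* Publish the new program on every rx ring first. */
	for (i = 0; i < priv->rx_ring_num; i++)
		old_prog = xchg(&priv->rx_ring[i]->xdp_prog, prog);

	/* Wait until no napi handler can still be running with the old
	 * program.  napi_synchronize() may msleep(1) per call, so the
	 * update time grows linearly with the number of rings.
	 */
	for (i = 0; i < priv->rx_ring_num; i++)
		napi_synchronize(&priv->rx_cq[i]->napi);

	/* Only now is it safe to drop the per-ring references. */
	if (old_prog)
		for (i = 0; i < priv->rx_ring_num; i++)
			bpf_prog_put(old_prog);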

Add a new rcu_head in bpf_prog_aux, to be used by a new bpf_prog_put_bh.
The action of the bpf_prog_put_bh would be to then call bpf_prog_put
later. Those drivers that consume a bpf prog in a bh context (like mlx4)
would then use the bpf_prog_put_bh instead when the ring is up. This has
the problem of complexity, in maintaining proper refcnts and rcu lists,
and would likely be harder to review. In addition, this approach to
freeing must be exclusive with other frees of the bpf prog, for instance
a _bh prog must not be referenced from a prog array that is consumed by
a non-_bh prog.
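
A rough sketch of that rejected variant (hypothetical: bpf_prog_put_bh
and the rcu_bh field do not exist; aux->prog, bpf_prog_put and
call_rcu_bh do):

static void __bpf_prog_put_bh(struct rcu_head *rcu)
{
	struct bpf_prog_aux *aux = container_of(rcu, struct bpf_prog_aux,
						rcu_bh);

	/* The deferred "real" put; may queue yet another rcu callback. */
	bpf_prog_put(aux->prog);
}

void bpf_prog_put_bh(struct bpf_prog *prog)
{
	/* An RCU-bh grace period waits for all softirq (napi) handlers
	 * to complete, which plain call_rcu does not guarantee under
	 * PREEMPT_RCU=y.  rcu_bh would be a new field in bpf_prog_aux.
	 */
	call_rcu_bh(&prog->aux->rcu_bh, __bpf_prog_put_bh);
}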

The placement of rcu_read_lock in this patch is functionally the same as
putting an rcu_read_lock in napi_poll. Actually doing so could be a
potentially controversial change, but would bring the implementation in
line with sk_busy_loop (though of course the nature of those two paths
is substantially different), and would also avoid copy/paste problems
for future drivers adding XDP support. Still, this patch does not take
that opinionated option.
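
Concretely, the rejected placement would have been on the order of the
following (a sketch against the napi_poll of net/core/dev.c from this
era; not part of this patch):

	/* in static int napi_poll(struct napi_struct *n,
	 *                         struct list_head *repoll)
	 */
	rcu_read_lock();
	work = n->poll(n, weight);	/* covers every driver's poll */
	rcu_read_unlock();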

Testing was done with kernels in either PREEMPT_RCU=y or
CONFIG_PREEMPT_VOLUNTARY=y+PREEMPT_RCU=n modes, with neither exhibiting
any drawback. With PREEMPT_RCU=n, the extra call to rcu_read_lock did
not show up in the perf report whatsoever, and with PREEMPT_RCU=y the
overhead of rcu_read_lock (according to perf) was the same before/after.
In the rx path, rcu_read_lock is eventually called for every packet
from netif_receive_skb_internal, so the napi poll call's rcu_read_lock
is easily amortized.

Fixes: d576acf0a22 ("net/mlx4_en: add page recycle to prepare rx ring for tx support")
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Signed-off-by: Brenden Blanco <bblanco@plumgrid.com>
---
 drivers/net/ethernet/mellanox/mlx4/en_rx.c | 2 ++
 1 file changed, 2 insertions(+)

Comments

Brenden Blanco Aug. 26, 2016, 9:01 p.m. UTC | #1
On Fri, Aug 26, 2016 at 01:38:08PM -0700, Brenden Blanco wrote:
> Depending on the preempt mode, the bpf_prog stored in xdp_prog may be
> freed despite the use of call_rcu inside bpf_prog_put. The situation is
> possible when running in PREEMPT_RCU=y mode, for instance, since the rcu
> callback for destroying the bpf prog can run even during the bh handling
> in the mlx4 rx path.
> 
> Several options were considered before this patch was settled on:
> 
> Add a napi_synchronize loop in mlx4_xdp_set, which would occur after all
> of the rings are updated with the new program.
> This approach has the disadvantage that as the number of rings
> increases, the speed of udpate will slow down significantly due to
I belatedly ran checkpatch, which pointed out the spelling mistake here.
I will update the udpate after waiting to see what other discussion
happens on this.
> napi_synchronize's msleep(1).
> 
> Add a new rcu_head in bpf_prog_aux, to be used by a new bpf_prog_put_bh.
> The action of the bpf_prog_put_bh would be to then call bpf_prog_put
> later. Those drivers that consume a bpf prog in a bh context (like mlx4)
> would then use the bpf_prog_put_bh instead when the ring is up. This has
> the problem of complexity, in maintaining proper refcnts and rcu lists,
> and would likely be harder to review. In addition, this approach to
> freeing must be exclusive with other frees of the bpf prog, for instance
> a _bh prog must not be referenced from a prog array that is consumed by
> a non-_bh prog.
> 
> The placement of rcu_read_lock in this patch is functionally the same as
> putting an rcu_read_lock in napi_poll. Actually doing so could be a
> potentially controversial change, but would bring the implementation in
> line with sk_busy_loop (though of course the nature of those two paths
> is substantially different), and would also avoid future copy/paste
> problems with future supporters of XDP. Still, this patch does not take
> that opinionated option.
> 
> Testing was done with kernels in either PREEMPT_RCU=y or
> CONFIG_PREEMPT_VOLUNTARY=y+PREEMPT_RCU=n modes, with neither exhibiting
> any drawback. With PREEMPT_RCU=n, the extra call to rcu_read_lock did
> not show up in the perf report whatsoever, and with PREEMPT_RCU=y the
> overhead of rcu_read_lock (according to perf) was the same before/after.
> In the rx path, rcu_read_lock is eventually called for every packet
> from netif_receive_skb_internal, so the napi poll call's rcu_read_lock
> is easily amortized.
> 
> Fixes: d576acf0a22 ("net/mlx4_en: add page recycle to prepare rx ring for tx support")
> Acked-by: Daniel Borkmann <daniel@iogearbox.net>
> Acked-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
> Signed-off-by: Brenden Blanco <bblanco@plumgrid.com>
> ---
>  drivers/net/ethernet/mellanox/mlx4/en_rx.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> index 2040dad..efed546 100644
> --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> @@ -800,6 +800,7 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
>  	if (budget <= 0)
>  		return polled;
>  
> +	rcu_read_lock();
>  	xdp_prog = READ_ONCE(ring->xdp_prog);
>  	doorbell_pending = 0;
>  	tx_index = (priv->tx_ring_num - priv->xdp_ring_num) + cq->ring;
> @@ -1077,6 +1078,7 @@ consumed:
>  	}
>  
>  out:
> +	rcu_read_unlock();
>  	if (doorbell_pending)
>  		mlx4_en_xmit_doorbell(priv->tx_ring[tx_index]);
>  
> -- 
> 2.9.3
>
Tariq Toukan Aug. 29, 2016, 2:59 p.m. UTC | #2
Hi Brenden,

The solution direction should be XDP-specific, so that it does not
hurt the regular flow.

On 26/08/2016 11:38 PM, Brenden Blanco wrote:
> Depending on the preempt mode, the bpf_prog stored in xdp_prog may be
> freed despite the use of call_rcu inside bpf_prog_put. The situation is
> possible when running in PREEMPT_RCU=y mode, for instance, since the rcu
> callback for destroying the bpf prog can run even during the bh handling
> in the mlx4 rx path.
>
> Several options were considered before this patch was settled on:
>
> Add a napi_synchronize loop in mlx4_xdp_set, which would occur after all
> of the rings are updated with the new program.
> This approach has the disadvantage that as the number of rings
> increases, the speed of udpate will slow down significantly due to
> napi_synchronize's msleep(1).
I prefer this option as it doesn't hurt the data path. A delay in a 
control command can be tolerated.
> Add a new rcu_head in bpf_prog_aux, to be used by a new bpf_prog_put_bh.
> The action of the bpf_prog_put_bh would be to then call bpf_prog_put
> later. Those drivers that consume a bpf prog in a bh context (like mlx4)
> would then use the bpf_prog_put_bh instead when the ring is up. This has
> the problem of complexity, in maintaining proper refcnts and rcu lists,
> and would likely be harder to review. In addition, this approach to
> freeing must be exclusive with other frees of the bpf prog, for instance
> a _bh prog must not be referenced from a prog array that is consumed by
> a non-_bh prog.
>
> The placement of rcu_read_lock in this patch is functionally the same as
> putting an rcu_read_lock in napi_poll. Actually doing so could be a
> potentially controversial change, but would bring the implementation in
> line with sk_busy_loop (though of course the nature of those two paths
> is substantially different), and would also avoid future copy/paste
> problems with future supporters of XDP. Still, this patch does not take
> that opinionated option.
So you decided to add a lock for all non-XDP flows, which are 99% of the 
cases.
We should avoid this.
>
> Testing was done with kernels in either PREEMPT_RCU=y or
> CONFIG_PREEMPT_VOLUNTARY=y+PREEMPT_RCU=n modes, with neither exhibiting
> any drawback. With PREEMPT_RCU=n, the extra call to rcu_read_lock did
> not show up in the perf report whatsoever, and with PREEMPT_RCU=y the
> overhead of rcu_read_lock (according to perf) was the same before/after.
> In the rx path, rcu_read_lock is eventually called for every packet
> from netif_receive_skb_internal, so the napi poll call's rcu_read_lock
> is easily amortized.
For now, I don't agree with this fix.
Let me think about the options you suggested.
I also need to do my perf tests.

Regards,
Tariq
Brenden Blanco Aug. 29, 2016, 3:55 p.m. UTC | #3
On Mon, Aug 29, 2016 at 05:59:26PM +0300, Tariq Toukan wrote:
> Hi Brenden,
> 
> The solution direction should be XDP specific that does not hurt the
> regular flow.
An rcu_read_lock is _already_ taken for _every_ packet. This is 1/64th of
that.
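
For context, the per-packet lock is the one in
netif_receive_skb_internal, roughly as below (paraphrased; the point
is one rcu_read_lock per packet there, versus one per napi poll of up
to 64 packets here):

static int netif_receive_skb_internal(struct sk_buff *skb)
{
	int ret;

	rcu_read_lock();	/* taken for every received packet */
	/* ... RPS steering elided ... */
	ret = __netif_receive_skb(skb);
	rcu_read_unlock();

	return ret;
}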
> 
> On 26/08/2016 11:38 PM, Brenden Blanco wrote:
> >Depending on the preempt mode, the bpf_prog stored in xdp_prog may be
> >freed despite the use of call_rcu inside bpf_prog_put. The situation is
> >possible when running in PREEMPT_RCU=y mode, for instance, since the rcu
> >callback for destroying the bpf prog can run even during the bh handling
> >in the mlx4 rx path.
> >
> >Several options were considered before this patch was settled on:
> >
> >Add a napi_synchronize loop in mlx4_xdp_set, which would occur after all
> >of the rings are updated with the new program.
> >This approach has the disadvantage that as the number of rings
> >increases, the speed of udpate will slow down significantly due to
> >napi_synchronize's msleep(1).
> I prefer this option as it doesn't hurt the data path. A delay in a
> control command can be tolerated.
> >Add a new rcu_head in bpf_prog_aux, to be used by a new bpf_prog_put_bh.
> >The action of the bpf_prog_put_bh would be to then call bpf_prog_put
> >later. Those drivers that consume a bpf prog in a bh context (like mlx4)
> >would then use the bpf_prog_put_bh instead when the ring is up. This has
> >the problem of complexity, in maintaining proper refcnts and rcu lists,
> >and would likely be harder to review. In addition, this approach to
> >freeing must be exclusive with other frees of the bpf prog, for instance
> >a _bh prog must not be referenced from a prog array that is consumed by
> >a non-_bh prog.
> >
> >The placement of rcu_read_lock in this patch is functionally the same as
> >putting an rcu_read_lock in napi_poll. Actually doing so could be a
> >potentially controversial change, but would bring the implementation in
> >line with sk_busy_loop (though of course the nature of those two paths
> >is substantially different), and would also avoid future copy/paste
> >problems with future supporters of XDP. Still, this patch does not take
> >that opinionated option.
> So you decided to add a lock for all non-XDP flows, which are 99% of
> the cases.
> We should avoid this.
The whole point of the rcu_read_lock architecture is that it is meant
to be taken in the fast path. There won't be a performance impact from
this patch.
> >
> >Testing was done with kernels in either PREEMPT_RCU=y or
> >CONFIG_PREEMPT_VOLUNTARY=y+PREEMPT_RCU=n modes, with neither exhibiting
> >any drawback. With PREEMPT_RCU=n, the extra call to rcu_read_lock did
> >not show up in the perf report whatsoever, and with PREEMPT_RCU=y the
> >overhead of rcu_read_lock (according to perf) was the same before/after.
> >In the rx path, rcu_read_lock is eventually called for every packet
> >from netif_receive_skb_internal, so the napi poll call's rcu_read_lock
> >is easily amortized.
> For now, I don't agree with this fix.
> Let me think about the options you suggested.
> I also need to do my perf tests.
> 
> Regards,
> Tariq
>
Tom Herbert Aug. 29, 2016, 5:46 p.m. UTC | #4
On Mon, Aug 29, 2016 at 8:55 AM, Brenden Blanco <bblanco@plumgrid.com> wrote:
> On Mon, Aug 29, 2016 at 05:59:26PM +0300, Tariq Toukan wrote:
>> Hi Brenden,
>>
>> The solution direction should be XDP specific that does not hurt the
>> regular flow.
> An rcu_read_lock is _already_ taken for _every_ packet. This is 1/64th of
> that.
>>
>> [...]
>>
>> So you decided to add a lock for all non-XDP flows, which are 99% of
>> the cases.
>> We should avoid this.
> The whole point of rcu_read_lock architecture is to be taken in the fast
> path. There won't be a performance impact from this patch.

+1, this is nothing at all like a spinlock, and really this should be
just like any other RCU-like access.

Brenden, tracking down how the structure is freed needed a few steps;
please make sure the RCU requirements are well documented. Also, I'm
still not a fan of using xchg to set the program; it seems that a lock
could be used in that path.

Thanks,
Tom
Saeed Mahameed Aug. 30, 2016, 9:35 a.m. UTC | #5
On Mon, Aug 29, 2016 at 8:46 PM, Tom Herbert <tom@herbertland.com> wrote:
> On Mon, Aug 29, 2016 at 8:55 AM, Brenden Blanco <bblanco@plumgrid.com> wrote:
>> On Mon, Aug 29, 2016 at 05:59:26PM +0300, Tariq Toukan wrote:
>>> Hi Brenden,
>>>
>>> The solution direction should be XDP specific that does not hurt the
>>> regular flow.
>> An rcu_read_lock is _already_ taken for _every_ packet. This is 1/64th of

In other words "let's add  new small speed bump, we already have
plenty ahead, so why not slow down now anyway".

Every single new instruction hurts performance, in this case maybe you
are right, maybe we won't feel any performance
impact, but that doesn't mean it is ok to do this.


>> that.
>>>
>>> [...]
>>>
>>> So you decided to add a lock for all non-XDP flows, which are 99% of
>>> the cases.
>>> We should avoid this.
>> The whole point of rcu_read_lock architecture is to be taken in the fast
>> path. There won't be a performance impact from this patch.
>
> +1, this is nothing at all like a spinlock and really this should be
> just like any other rcu like access.
>
> Brenden, tracking down how the structure is freed needed a few steps,
> please make sure the RCU requirements are well documented. Also, I'm
> still not a fan of using xchg to set the program, seems that a lock
> could be used in that path.
>
> Thanks,
> Tom

Sorry folks, I am with Tariq on this: you can't just add a single
instruction that is only valid/needed for 1% of the use cases to the
driver's general data path, even if it were as cheap as one CPU cycle!

Let me try to suggest something: instead of taking the rcu_read_lock
for the whole of mlx4_en_process_rx_cq, we can minimize it to the XDP
code path only, by double-checking xdp_prog (an unprotected check
followed by a protected check inside the mlx4 XDP critical path).

I.e., instead of:

rcu_read_lock();

xdp_prog = ring->xdp_prog;

//__Do lots of non-XDP related stuff__

if (xdp_prog) {
    //Do XDP magic ..
}
//__Do more of non-XDP related stuff__

rcu_read_unlock();


We can minimize it to the XDP critical path only:

//Non-protected xdp_prog dereference.
xdp_prog = ring->xdp_prog;
if (xdp_prog) {
	rcu_read_lock();
	//Protected dereference to ring->xdp_prog.
	xdp_prog = ring->xdp_prog;
	if (unlikely(!xdp_prog))
		goto unlock;
	//Do XDP magic ..

unlock:
	rcu_read_unlock();
}
Brenden Blanco Aug. 31, 2016, 1:50 a.m. UTC | #6
On Tue, Aug 30, 2016 at 12:35:58PM +0300, Saeed Mahameed wrote:
> On Mon, Aug 29, 2016 at 8:46 PM, Tom Herbert <tom@herbertland.com> wrote:
> > On Mon, Aug 29, 2016 at 8:55 AM, Brenden Blanco <bblanco@plumgrid.com> wrote:
> >> On Mon, Aug 29, 2016 at 05:59:26PM +0300, Tariq Toukan wrote:
> >>> Hi Brenden,
> >>>
> >>> The solution direction should be XDP specific that does not hurt the
> >>> regular flow.
> >> An rcu_read_lock is _already_ taken for _every_ packet. This is 1/64th of
> 
> In other words "let's add  new small speed bump, we already have
> plenty ahead, so why not slow down now anyway".
> 
> Every single new instruction hurts performance, in this case maybe you
> are right, maybe we won't feel any performance
> impact, but that doesn't mean it is ok to do this.
Actually, I will make a stronger assertion. Unless your .config contains
CONFIG_PREEMPT=y (not most distros) or something like DEBUG_ATOMIC_SLEEP
(to trigger PREEMPT_COUNT), the code in this patch will be a nop.
Therefore, adding the protections that you mention below will be
_slower_ than the code already proposed.
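
For reference, this is roughly what rcu_read_lock boils down to in
such a config (paraphrasing include/linux/rcupdate.h and
include/linux/preempt.h; lockdep hooks omitted):

/* !CONFIG_PREEMPT_RCU */
static inline void __rcu_read_lock(void)
{
	preempt_disable();
}

/* !CONFIG_PREEMPT_COUNT: no preempt counter is maintained */
#define preempt_disable()	barrier()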
>
> [...]
>
> Sorry folks I am with Tariq on this, you can't just add a single
> instruction which is only valid/needed for 1% of the use cases
> to the driver's general data path, even if it was as cheap as one cpu cycle!
How about 0?

$ diff mlx4_en.ko.norcu.s mlx4_en.ko.rcu.s | wc -l
0

> 
> Let me try to suggest something:
> instead of taking the rcu_read_lock for the whole
> mlx4_en_process_rx_cq, we can minimize to XDP code path only
> by double checking xdp_prog (non-protected check followed by a
> protected check inside mlx4 XDP critical path).
> 
> i.e instead of:
> 
> rcu_read_lock();
> 
> xdp_prog = ring->xdp_prog;
> 
> //__Do lots of non-XDP related stuff__
> 
> if (xdp_prog) {
>     //Do XDP magic ..
> }
> //__Do more of non-XDP related stuff__
> 
> rcu_read_unlock();
> 
> 
> We can minimize it to XDP critical path only:
> 
> //Non protected xdp_prog dereference.
> if (xdp_prog) {
>      rcu_read_lock();
>      //Protected dereference to ring->xdp_prog
>      xdp_prog = ring->xdp_prog;
> >      if (unlikely(!xdp_prog)) goto unlock;

The addition of this branch and extra deref is now slowing down the xdp
path compared to the current proposal.

>     //Do XDP magic ..
> 
> unlock:
>      rcu_read_unlock();
> }
Saeed Mahameed Sept. 1, 2016, 10:59 p.m. UTC | #7
On Wed, Aug 31, 2016 at 4:50 AM, Brenden Blanco <bblanco@plumgrid.com> wrote:
> On Tue, Aug 30, 2016 at 12:35:58PM +0300, Saeed Mahameed wrote:
>> On Mon, Aug 29, 2016 at 8:46 PM, Tom Herbert <tom@herbertland.com> wrote:
>> > On Mon, Aug 29, 2016 at 8:55 AM, Brenden Blanco <bblanco@plumgrid.com> wrote:
>> >> On Mon, Aug 29, 2016 at 05:59:26PM +0300, Tariq Toukan wrote:
>> >>> Hi Brenden,
>> >>>
>> >>> The solution direction should be XDP specific that does not hurt the
>> >>> regular flow.
>> >> An rcu_read_lock is _already_ taken for _every_ packet. This is 1/64th of
>>
>> In other words "let's add  new small speed bump, we already have
>> plenty ahead, so why not slow down now anyway".
>>
>> Every single new instruction hurts performance, in this case maybe you
>> are right, maybe we won't feel any performance
>> impact, but that doesn't mean it is ok to do this.
> Actually, I will make a stronger assertion. Unless your .config contains
> CONFIG_PREEMPT=y (not most distros) or something like DEBUG_ATOMIC_SLEEP
> (to trigger PREEMPT_COUNT), the code in this patch will be a nop.
> Therefore, adding the protections that you mention below will be
> _slower_ than the code already proposed.
>>
>> [...]
>>
>> Sorry folks I am with Tariq on this, you can't just add a single
>> instruction which is only valid/needed for 1% of the use cases
>> to the driver's general data path, even if it was as cheap as one cpu cycle!
> How about 0?
>
> $ diff mlx4_en.ko.norcu.s mlx4_en.ko.rcu.s | wc -l
> 0
>

Well, if you put it this way, it seems OK then.

Anyway, I would add a friendly comment beside the rcu_read_lock, such
as "this is needed to protect access to ring->xdp_prog".

>>
>> Let me try to suggest something:
>> instead of taking the rcu_read_lock for the whole
>> mlx4_en_process_rx_cq, we can minimize to XDP code path only
>> by double checking xdp_prog (non-protected check followed by a
>> protected check inside mlx4 XDP critical path).
>>
>> i.e instead of:
>>
>> rcu_read_lock();
>>
>> xdp_prog = ring->xdp_prog;
>>
>> //__Do lots of non-XDP related stuff__
>>
>> if (xdp_prog) {
>>     //Do XDP magic ..
>> }
>> //__Do more of non-XDP related stuff__
>>
>> rcu_read_unlock();
>>
>>
>> We can minimize it to XDP critical path only:
>>
>> //Non protected xdp_prog dereference.
>> if (xdp_prog) {
>>      rcu_read_lock();
>>      //Protected dereference to ring->xdp_prog
>>      xdp_prog = ring->xdp_prog;
>>      if (unlikely(!xdp_prog)) goto unlock;
>
> The addition of this branch and extra deref is now slowing down the xdp
> path compared to the current proposal.
>

Yep, but this is an unlikely condition, and the critical section here
is much smaller. It is clearer that the rcu_read_lock here is meant to
protect ring->xdp_prog within this small XDP critical section, in
comparison to your patch, where it is held across the whole RX
function.
Tom Herbert Sept. 1, 2016, 11:30 p.m. UTC | #8
On Thu, Sep 1, 2016 at 3:59 PM, Saeed Mahameed
<saeedm@dev.mellanox.co.il> wrote:
> [...]
>
> Yep, but this is an unlikely condition and the critical code here is
> much smaller and it is more clear that the rcu_read_lock here meant to
> protect the ring->xdp_prog under this small xdp critical section in
> comparison to your patch where it is held across the whole RX
> function.

Note that there is already a potentially per-packet rcu_read_lock
buried in the function; if the whole function is under rcu_read_lock,
then that one can be removed.

Tom
Brenden Blanco Sept. 2, 2016, 5:50 p.m. UTC | #9
On Thu, Sep 01, 2016 at 04:30:28PM -0700, Tom Herbert wrote:
[...]
> > Yep, but this is an unlikely condition and the critical code here is
> > much smaller and it is more clear that the rcu_read_lock here meant to
> > protect the ring->xdp_prog under this small xdp critical section in
> > comparison to your patch where it is held across the whole RX
> > function.
> 
> Note that there is already an rcu_read_lock potentially per packet
> buried in the function, if the whole function is under rcu_read_lock
> then that can be removed.
Yes, I was aware of that. I had left it as-is since: 1. it seemed to
be in an exception path and less performance-sensitive to nested
calls, and 2. if some future developer removed the top-level
rcu_read_lock, the finer-grained one would have been left unprotected
if not code-reviewed carefully.

I'll instead add a note at the top pointing out the dual need for the
lock, to address both yours and Saeed's comments.
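
Something along these lines, for instance (illustrative wording only,
not the final patch text):

	/* rcu_read_lock does two jobs here:
	 * 1) keep ring->xdp_prog valid even if bpf_prog_put's call_rcu
	 *    callback runs during this bh handler, and
	 * 2) cover the nested per-packet rcu usage further down, so it
	 *    must stay even if that nested usage ever goes away.
	 */
	rcu_read_lock();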

As a side note, when considering moving the rcu_read_lock to a more
generic location (napi), I had toyed with the idea of benchmarking to
see if removing the actually-fast-path use of
rcu_read_lock in netif_receive_skb_internal could have any performance
benefit for the universal use case (non-xdp). However, that seems
completely out of scope at the moment, and only beneficial for
non-standard (IMO) .configs, besides being much harder to review. It was
showing up in perf at about 1-2% overhead in preempt=y kernels.
> 
> Tom
Brenden Blanco Sept. 2, 2016, 6:01 p.m. UTC | #10
On Fri, Sep 02, 2016 at 01:59:40AM +0300, Saeed Mahameed wrote:
> On Wed, Aug 31, 2016 at 4:50 AM, Brenden Blanco <bblanco@plumgrid.com> wrote:
> > On Tue, Aug 30, 2016 at 12:35:58PM +0300, Saeed Mahameed wrote:
[...]
> >> Sorry folks I am with Tariq on this, you can't just add a single
> >> instruction which is only valid/needed for 1% of the use cases
> >> to the driver's general data path, even if it was as cheap as one cpu cycle!
> > How about 0?
> >
> > $ diff mlx4_en.ko.norcu.s mlx4_en.ko.rcu.s | wc -l
> > 0
> >
> 
> Well, If you put it this way, it seems OK then.
> 
> Anyway I would add a friendly comment beside the rcu_read_lock that
> "this is needed to protect
> access to ring->xdp_prog".

Thanks, I will go ahead with this then.

> 
> >> [...]
> >
> > The addition of this branch and extra deref is now slowing down the xdp
> > path compared to the current proposal.
> >
> 
> Yep, but this is an unlikely condition and the critical code here is
> much smaller and it is more clear that the rcu_read_lock here meant to
> protect the ring->xdp_prog under this small xdp critical section in
> comparison to your patch where it is held across the whole RX
> function.

It's really an improper use of RCU though.

RCU is meant to provide correctness without sacrificing any performance
in the fastpath. It is designed to avoid having to double-dereference
and other such tricks, so shouldn't we use it as it was designed?
Having a larger-scoped rcu_read_lock doesn't hurt anybody here, but the
extra memory reads certainly _do_ impact the XDP path, which folks are
going to start relying on to be performant.
away at that.
Brenden Blanco Sept. 2, 2016, 6:13 p.m. UTC | #11
On Mon, Aug 29, 2016 at 10:46:38AM -0700, Tom Herbert wrote:
[...]
> Brenden, tracking down how the structure is freed needed a few steps,
> please make sure the RCU requirements are well documented. Also, I'm
Really? It's just bpf_prog_put->call_rcu(__bpf_prog_put_rcu). I suppose
what's missing is a general guideline for which functions new consumers
of bpf should use, but I wouldn't trust myself to write such holistic
documentation accurately (e.g. interacting with nmi probes and such).
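
For reference, the whole path is roughly the following (paraphrased
from kernel/bpf/syscall.c of this era):

static void __bpf_prog_put_rcu(struct rcu_head *rcu)
{
	struct bpf_prog_aux *aux = container_of(rcu, struct bpf_prog_aux, rcu);

	free_used_maps(aux);
	bpf_prog_uncharge_memlock(aux->prog);
	bpf_prog_free(aux->prog);
}

void bpf_prog_put(struct bpf_prog *prog)
{
	if (atomic_dec_and_test(&prog->aux->refcnt))
		call_rcu(&prog->aux->rcu, __bpf_prog_put_rcu);
}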
> still not a fan of using xchg to set the program, seems that a lock
> could be used in that path.
Where would such a lock go? Everything in mlx4/en_netdev.c relies on
rtnl, which seems sufficient and obvious... adding some new
field-specific lock would be distracting and unneeded.
> 
> Thanks,
> Tom
Tom Herbert Sept. 2, 2016, 7:14 p.m. UTC | #12
On Fri, Sep 2, 2016 at 11:13 AM, Brenden Blanco <bblanco@plumgrid.com> wrote:
> On Mon, Aug 29, 2016 at 10:46:38AM -0700, Tom Herbert wrote:
> [...]
>> Brenden, tracking down how the structure is freed needed a few steps,
>> please make sure the RCU requirements are well documented. Also, I'm
> Really? It's just bpf_prog_put->call_rcu(__bpf_prog_put_rcu). I suppose
> what's missing is a general guideline for which functions new consumers
> of bpf should use, but I wouldn't trust myself to write such holistic
> documentation accurately (e.g. interacting with nmi probes and such).
>> still not a fan of using xchg to set the program, seems that a lock
>> could be used in that path.
> Where would such a lock go? Everything in mlx4/en_netdev.c relies on
> rtnl, which seems sufficient and obvious...adding some new field
> specific lock would be distracting and unneeded.

Just use the same mutex that is already being taken in the case when
priv->xdp_ring_num != xdp_ring_num. Then use the normal RCU functions
to dereference and assign pointers.
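
A minimal sketch of that, assuming the mdev->state_lock mutex that
mlx4_xdp_set already takes (names approximate; the xdp_prog field
would also need an __rcu annotation):

	mutex_lock(&mdev->state_lock);
	for (i = 0; i < priv->rx_ring_num; i++) {
		old_prog = rcu_dereference_protected(
				priv->rx_ring[i]->xdp_prog,
				lockdep_is_held(&mdev->state_lock));
		rcu_assign_pointer(priv->rx_ring[i]->xdp_prog, prog);
		if (old_prog)
			bpf_prog_put(old_prog);
	}
	mutex_unlock(&mdev->state_lock);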

Tom

>>
>> Thanks,
>> Tom

Patch

diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
index 2040dad..efed546 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -800,6 +800,7 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
 	if (budget <= 0)
 		return polled;
 
+	rcu_read_lock();
 	xdp_prog = READ_ONCE(ring->xdp_prog);
 	doorbell_pending = 0;
 	tx_index = (priv->tx_ring_num - priv->xdp_ring_num) + cq->ring;
@@ -1077,6 +1078,7 @@ consumed:
 	}
 
 out:
+	rcu_read_unlock();
 	if (doorbell_pending)
 		mlx4_en_xmit_doorbell(priv->tx_ring[tx_index]);