[net-next,2/2] pfifo_fast: drop unneeded additional lock on dequeue

Message ID 8a1740148995663939837bedb14f29716c7cf6f5.1526392746.git.pabeni@redhat.com
State Accepted
Delegated to: David Miller
Series
  • sched: refactor NOLOCK qdiscs

Commit Message

Paolo Abeni May 15, 2018, 2:24 p.m.
After the previous patch, q->seqlock is always held when dequeue()
is invoked on NOLOCK qdiscs, so we can drop any additional locking
protecting that operation.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 include/linux/skb_array.h | 5 +++++
 net/sched/sch_generic.c   | 4 ++--
 2 files changed, 7 insertions(+), 2 deletions(-)

Comments

Michael S. Tsirkin May 15, 2018, 8:17 p.m. | #1
On Tue, May 15, 2018 at 04:24:37PM +0200, Paolo Abeni wrote:
> After the previous patch, q->seqlock is always held when dequeue()
> is invoked on NOLOCK qdiscs, so we can drop any additional locking
> protecting that operation.
> 
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> ---
>  include/linux/skb_array.h | 5 +++++
>  net/sched/sch_generic.c   | 4 ++--
>  2 files changed, 7 insertions(+), 2 deletions(-)

Is the seqlock taken during qdisc_change_tx_queue_len?
We need to prevent that racing with dequeue.

> diff --git a/include/linux/skb_array.h b/include/linux/skb_array.h
> index a6b6e8bb3d7b..62d9b0a6329f 100644
> --- a/include/linux/skb_array.h
> +++ b/include/linux/skb_array.h
> @@ -97,6 +97,11 @@ static inline bool skb_array_empty_any(struct skb_array *a)
>  	return ptr_ring_empty_any(&a->ring);
>  }
>  
> +static inline struct sk_buff *__skb_array_consume(struct skb_array *a)
> +{
> +	return __ptr_ring_consume(&a->ring);
> +}
> +
>  static inline struct sk_buff *skb_array_consume(struct skb_array *a)
>  {
>  	return ptr_ring_consume(&a->ring);
> diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
> index a126f16bc30b..760ab1b09f8b 100644
> --- a/net/sched/sch_generic.c
> +++ b/net/sched/sch_generic.c
> @@ -656,7 +656,7 @@ static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc)
>  		if (__skb_array_empty(q))
>  			continue;
>  
> -		skb = skb_array_consume_bh(q);
> +		skb = __skb_array_consume(q);
>  	}
>  	if (likely(skb)) {
>  		qdisc_qstats_cpu_backlog_dec(qdisc, skb);
> @@ -697,7 +697,7 @@ static void pfifo_fast_reset(struct Qdisc *qdisc)
>  		if (!q->ring.queue)
>  			continue;
>  
> -		while ((skb = skb_array_consume_bh(q)) != NULL)
> +		while ((skb = __skb_array_consume(q)) != NULL)
>  			kfree_skb(skb);
>  	}
>  
> -- 
> 2.14.3
Paolo Abeni May 16, 2018, 7:56 a.m. | #2
On Tue, 2018-05-15 at 23:17 +0300, Michael S. Tsirkin wrote:
> On Tue, May 15, 2018 at 04:24:37PM +0200, Paolo Abeni wrote:
> > After the previous patch, q->seqlock is always held when dequeue()
> > is invoked on NOLOCK qdiscs, so we can drop any additional locking
> > protecting that operation.
> > 
> > Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> > ---
> >  include/linux/skb_array.h | 5 +++++
> >  net/sched/sch_generic.c   | 4 ++--
> >  2 files changed, 7 insertions(+), 2 deletions(-)
> 
> Is the seqlock taken during qdisc_change_tx_queue_len?
> We need to prevent that racing with dequeue.

Thanks for the heads-up! I missed that code path.

I'll add the lock in qdisc_change_tx_queue_len() in v2.

Thank you,

Paolo
Paolo Abeni May 16, 2018, 9:57 a.m. | #3
On Wed, 2018-05-16 at 09:56 +0200, Paolo Abeni wrote:
> On Tue, 2018-05-15 at 23:17 +0300, Michael S. Tsirkin wrote:
> > On Tue, May 15, 2018 at 04:24:37PM +0200, Paolo Abeni wrote:
> > > After the previous patch, q->seqlock is always held when dequeue()
> > > is invoked on NOLOCK qdiscs, so we can drop any additional locking
> > > protecting that operation.
> > > 
> > > Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> > > ---
> > >  include/linux/skb_array.h | 5 +++++
> > >  net/sched/sch_generic.c   | 4 ++--
> > >  2 files changed, 7 insertions(+), 2 deletions(-)
> > 
> > Is the seqlock taken during qdisc_change_tx_queue_len?
> > We need to prevent that racing with dequeue.
> 
> Thanks for the heads-up! I missed that code path.
> 
> I'll add the lock in qdisc_change_tx_queue_len() in v2.

Actually, the lock is not needed in qdisc_change_tx_queue_len(): the
device is deactivated before calling ops->change_tx_queue_len, so the
latter can't race with ops->dequeue().

I think the current patch is safe.

Cheers,

Paolo
Michael S. Tsirkin May 16, 2018, 2:24 p.m. | #4
On Wed, May 16, 2018 at 09:56:16AM +0200, Paolo Abeni wrote:
> On Tue, 2018-05-15 at 23:17 +0300, Michael S. Tsirkin wrote:
> > On Tue, May 15, 2018 at 04:24:37PM +0200, Paolo Abeni wrote:
> > > After the previous patch, q->seqlock is always held when dequeue()
> > > is invoked on NOLOCK qdiscs, so we can drop any additional locking
> > > protecting that operation.
> > > 
> > > Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> > > ---
> > >  include/linux/skb_array.h | 5 +++++
> > >  net/sched/sch_generic.c   | 4 ++--
> > >  2 files changed, 7 insertions(+), 2 deletions(-)
> > 
> > Is the seqlock taken during qdisc_change_tx_queue_len?
> > We need to prevent that racing with dequeue.
> 
> Thanks for the heads-up! I missed that code path.
> 
> I'll add the lock in qdisc_change_tx_queue_len() in v2.
> 
> Thank you,
> 
> Paolo

When you do, make sure locks nest consistently.

Patch

diff --git a/include/linux/skb_array.h b/include/linux/skb_array.h
index a6b6e8bb3d7b..62d9b0a6329f 100644
--- a/include/linux/skb_array.h
+++ b/include/linux/skb_array.h
@@ -97,6 +97,11 @@ static inline bool skb_array_empty_any(struct skb_array *a)
 	return ptr_ring_empty_any(&a->ring);
 }
 
+static inline struct sk_buff *__skb_array_consume(struct skb_array *a)
+{
+	return __ptr_ring_consume(&a->ring);
+}
+
 static inline struct sk_buff *skb_array_consume(struct skb_array *a)
 {
 	return ptr_ring_consume(&a->ring);
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index a126f16bc30b..760ab1b09f8b 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -656,7 +656,7 @@ static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc)
 		if (__skb_array_empty(q))
 			continue;
 
-		skb = skb_array_consume_bh(q);
+		skb = __skb_array_consume(q);
 	}
 	if (likely(skb)) {
 		qdisc_qstats_cpu_backlog_dec(qdisc, skb);
@@ -697,7 +697,7 @@ static void pfifo_fast_reset(struct Qdisc *qdisc)
 		if (!q->ring.queue)
 			continue;
 
-		while ((skb = skb_array_consume_bh(q)) != NULL)
+		while ((skb = __skb_array_consume(q)) != NULL)
 			kfree_skb(skb);
 	}