
net/mlx5_core/pagealloc: Remove deprecated create_singlethread_workqueue

Message ID 20160728081949.GA5561@Karyakshetra
State Changes Requested, archived
Delegated to: David Miller

Commit Message

Bhaktipriya Shridhar July 28, 2016, 8:19 a.m. UTC
A dedicated workqueue has been used since the work items are being used
on a memory reclaim path. WQ_MEM_RECLAIM has been set to guarantee forward
progress under memory pressure.

The workqueue has a single work item. Hence, alloc_workqueue() is used
instead of alloc_ordered_workqueue() since ordering is unnecessary when
there's only one work item.

An explicit concurrency limit is unnecessary here since there is only a
fixed number of work items.

Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--
2.1.4
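
For context, a minimal sketch of what the conversion changes, assuming a
hypothetical wrapper function (the workqueue calls and flags are real kernel
API; create_singlethread_workqueue() is itself a wrapper around
alloc_ordered_workqueue() with WQ_MEM_RECLAIM set):

    #include <linux/errno.h>
    #include <linux/workqueue.h>

    static struct workqueue_struct *pg_wq;

    static int example_pagealloc_start(void)
    {
            /*
             * Before: create_singlethread_workqueue("mlx5_page_allocator"),
             * i.e. an ordered queue with a WQ_MEM_RECLAIM rescuer thread.
             *
             * After: an unordered queue that keeps the rescuer. Passing 0
             * for max_active selects the default limit rather than 1.
             */
            pg_wq = alloc_workqueue("mlx5_page_allocator", WQ_MEM_RECLAIM, 0);
            if (!pg_wq)
                    return -ENOMEM;
            return 0;
    }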

Comments

Leon Romanovsky July 28, 2016, 9:37 a.m. UTC | #1
On Thu, Jul 28, 2016 at 01:49:49PM +0530, Bhaktipriya Shridhar wrote:
> A dedicated workqueue has been used since the work items are being used
> on a memory reclaim path. WQ_MEM_RECLAIM has been set to guarantee forward
> progress under memory pressure.
> 
> The workqueue has a single work item. Hence, alloc_workqueue() is used
> instead of alloc_ordered_workqueue() since ordering is unnecessary when
> there's only one work item.
> 
> An explicit concurrency limit is unnecessary here since there is only a
> fixed number of work items.
> 
> Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com>
> ---
>  drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)

Hi Bhaktipriya,

First of all, I would like to thank you for your work and invite you to
continue, but can you please submit ONE patch SERIES which changes all
similar places?

BTW,
Did you test this patch? Did you notice the memory reclaim path nature
of this work?

Thanks

> 
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
> index 9eeee05..7c85262 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
> @@ -552,7 +552,8 @@ void mlx5_pagealloc_cleanup(struct mlx5_core_dev *dev)
> 
>  int mlx5_pagealloc_start(struct mlx5_core_dev *dev)
>  {
> -	dev->priv.pg_wq = create_singlethread_workqueue("mlx5_page_allocator");
> +	dev->priv.pg_wq = alloc_workqueue("mlx5_page_allocator",
> +					  WQ_MEM_RECLAIM, 0);
>  	if (!dev->priv.pg_wq)
>  		return -ENOMEM;
> 
> --
> 2.1.4
> 
Saeed Mahameed July 28, 2016, 10:44 p.m. UTC | #2
On Thu, Jul 28, 2016 at 11:19 AM, Bhaktipriya Shridhar
<bhaktipriya96@gmail.com> wrote:
> A dedicated workqueue has been used since the work items are being used
> on a memory reclaim path. WQ_MEM_RECLAIM has been set to guarantee forward
> progress under memory pressure.
>
> The workqueue has a single work item. Hence, alloc_workqueue() is used
> instead of alloc_ordered_workqueue() since ordering is unnecessary when
> there's only one work item.

Let's keep the current behavior (a single-threaded WQ): at the moment we
don't know how this WQ will evolve in the future, and the original author
had something in mind, so let's keep it.
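
If the single-threaded behavior is worth keeping, a sketch of the
alternative that still drops the deprecated wrapper (not what this patch
does) would be:

    /* Ordered: at most one work item in flight, executed in queueing
     * order, with the WQ_MEM_RECLAIM rescuer retained. */
    dev->priv.pg_wq = alloc_ordered_workqueue("mlx5_page_allocator",
                                              WQ_MEM_RECLAIM);
    if (!dev->priv.pg_wq)
            return -ENOMEM;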

>
> An explicit concurrency limit is unnecessary here since there is only a
> fixed number of work items.
>
> Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com>
> ---
>  drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
> index 9eeee05..7c85262 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
> @@ -552,7 +552,8 @@ void mlx5_pagealloc_cleanup(struct mlx5_core_dev *dev)
>
>  int mlx5_pagealloc_start(struct mlx5_core_dev *dev)
>  {
> -       dev->priv.pg_wq = create_singlethread_workqueue("mlx5_page_allocator");
> +       dev->priv.pg_wq = alloc_workqueue("mlx5_page_allocator",
> +                                         WQ_MEM_RECLAIM, 0);
>         if (!dev->priv.pg_wq)
>                 return -ENOMEM;
>
> --
> 2.1.4
>
Saeed Mahameed July 28, 2016, 10:45 p.m. UTC | #3
On Thu, Jul 28, 2016 at 12:37 PM, Leon Romanovsky <leonro@mellanox.com> wrote:
> On Thu, Jul 28, 2016 at 01:49:49PM +0530, Bhaktipriya Shridhar wrote:
>> A dedicated workqueue has been used since the work items are being used
>> on a memory reclaim path. WQ_MEM_RECLAIM has been set to guarantee forward
>> progress under memory pressure.
>>
>> The workqueue has a single work item. Hence, alloc_workqueue() is used
>> instead of alloc_ordered_workqueue() since ordering is unnecessary when
>> there's only one work item.
>>
>> An explicit concurrency limit is unnecessary here since there is only a
>> fixed number of work items.
>>
>> Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com>
>> ---
>>  drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> Hi Bhaktipriya,
>
> First of all, I would like to thank you for your work and invite you to
> continue, but can you please submit ONE patch SERIES which changes all
> similar places?
>

I agree with Leon; please push one series for all the mlx5 patches and add
some explanation in the cover letter regarding the motivation for this
work.

> BTW,
> Did you test this patch? Did you notice the memory reclaim path nature
> of this work?
>
> Thanks
>
>>
>> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
>> index 9eeee05..7c85262 100644
>> --- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
>> +++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
>> @@ -552,7 +552,8 @@ void mlx5_pagealloc_cleanup(struct mlx5_core_dev *dev)
>>
>>  int mlx5_pagealloc_start(struct mlx5_core_dev *dev)
>>  {
>> -     dev->priv.pg_wq = create_singlethread_workqueue("mlx5_page_allocator");
>> +     dev->priv.pg_wq = alloc_workqueue("mlx5_page_allocator",
>> +                                       WQ_MEM_RECLAIM, 0);
>>       if (!dev->priv.pg_wq)
>>               return -ENOMEM;
>>
>> --
>> 2.1.4
>>
Tejun Heo July 29, 2016, 12:22 p.m. UTC | #4
Hello,

On Thu, Jul 28, 2016 at 12:37:35PM +0300, Leon Romanovsky wrote:
> Did you test this patch? Did you notice the memory reclaim path nature
> of this work?

The conversion uses WQ_MEM_RECLAIM, which is standard for all workqueues
that can stall packet processing if they themselves stall.  The
requirement comes from NFS or block devices over the network.
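
The guarantee comes from the rescuer thread that WQ_MEM_RECLAIM
pre-allocates; a sketch of the idea, with a hypothetical queue name:

    /*
     * Under memory pressure the kernel may be unable to fork new kworker
     * threads. A WQ_MEM_RECLAIM queue allocates one rescuer thread up
     * front at alloc_workqueue() time, so queued items still make
     * progress even when reclaim itself is waiting on them (e.g. NFS or
     * a network block device writing back pages through this NIC).
     */
    wq = alloc_workqueue("reclaim_safe_wq", WQ_MEM_RECLAIM, 0);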

Thanks.
Leon Romanovsky July 31, 2016, 6:35 a.m. UTC | #5
On Fri, Jul 29, 2016 at 08:22:37AM -0400, Tejun Heo wrote:
> Hello,
> 
> On Thu, Jul 28, 2016 at 12:37:35PM +0300, Leon Romanovsky wrote:
> > Did you test this patch? Did you notice the memory reclaim path nature
> > of this work?
> 
> The conversion uses WQ_MEM_RECLAIM, which is standard for all workqueues
> that can stall packet processing if they themselves stall.  The
> requirement comes from NFS or block devices over the network.

The title says "remove deprecated create_singlethread_workqueue", but if
we put aside the word "deprecated", the code moves from an ordered
workqueue to an unordered one and changes the max_active count, neither of
which is expressed in the commit message.
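
Concretely, the behavioral difference being pointed at (flag semantics per
include/linux/workqueue.h; WQ_DFL_ACTIVE was 256 at the time):

    /* What the legacy wrapper provided: max_active forced to 1, work
     * items run strictly in queueing order. */
    wq = alloc_ordered_workqueue("mlx5_page_allocator", WQ_MEM_RECLAIM);

    /* What the patch creates: max_active of 0 selects WQ_DFL_ACTIVE,
     * so items may run concurrently and complete out of order. */
    wq = alloc_workqueue("mlx5_page_allocator", WQ_MEM_RECLAIM, 0);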

For reclaim paths, I want to be extra cautious and want to see
justification for doing that, along with an understanding of whether it
has been tested or not.

Right now, I feel like I'm participating in a soap opera where one sends a
patch for every line, waits, and then sends the same patch for another
file. It would be worthwhile to send one patch set and let us test it all
at once.

Thanks.

> 
> Thanks.
> 
> -- 
> tejun
Leon Romanovsky Aug. 2, 2016, 5:56 a.m. UTC | #6
On Mon, Aug 01, 2016 at 11:11:19AM -0400, Tejun Heo wrote:
> > Right now, I feel like I'm participating in a soap opera where one sends
> > a patch for every line, waits, and then sends the same patch for another
> > file. It would be worthwhile to send one patch set and let us test it
> > all at once.
> 
Yeah, I guess it can be a bit annoying.  Bhaktipriya, can you please
group the patches for Mellanox?

Please do.
Thanks.

> 
> Thanks.
> 
> -- 
> tejun

Patch

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
index 9eeee05..7c85262 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
@@ -552,7 +552,8 @@  void mlx5_pagealloc_cleanup(struct mlx5_core_dev *dev)

 int mlx5_pagealloc_start(struct mlx5_core_dev *dev)
 {
-	dev->priv.pg_wq = create_singlethread_workqueue("mlx5_page_allocator");
+	dev->priv.pg_wq = alloc_workqueue("mlx5_page_allocator",
+					  WQ_MEM_RECLAIM, 0);
 	if (!dev->priv.pg_wq)
 		return -ENOMEM;