
Re: [PATCH][v2] page_pool: handle page recycle for NUMA_NO_NODE condition

Message ID a5dea60221d84886991168781361b591@baidu.com
State RFC
Delegated to: David Ahern
Series: Re: [PATCH][v2] page_pool: handle page recycle for NUMA_NO_NODE condition

Commit Message

Li RongQing Dec. 16, 2019, 4:02 a.m. UTC
> -----Original Message-----
> From: Yunsheng Lin [mailto:linyunsheng@huawei.com]
> Sent: December 16, 2019 9:51
> To: Jesper Dangaard Brouer <brouer@redhat.com>
> Cc: Li,Rongqing <lirongqing@baidu.com>; Saeed Mahameed
> <saeedm@mellanox.com>; ilias.apalodimas@linaro.org;
> jonathan.lemon@gmail.com; netdev@vger.kernel.org; mhocko@kernel.org;
> peterz@infradead.org; Greg Kroah-Hartman <gregkh@linuxfoundation.org>;
> bhelgaas@google.com; linux-kernel@vger.kernel.org; Björn Töpel
> <bjorn.topel@intel.com>
> Subject: Re: [PATCH][v2] page_pool: handle page recycle for NUMA_NO_NODE
> condition
> 
> On 2019/12/13 16:48, Jesper Dangaard Brouer wrote:> You are basically saying
> that the NUMA check should be moved to
> > allocation time, as it is running the RX-CPU (NAPI).  And eventually
> > after some time the pages will come from correct NUMA node.
> >
> > I think we can do that, and only affect the semi-fast-path.
> > We just need to handle that pages in the ptr_ring that are recycled
> > can be from the wrong NUMA node.  In __page_pool_get_cached() when
> > consuming pages from the ptr_ring (__ptr_ring_consume_batched), then
> > we can evict pages from wrong NUMA node.
> 
> Yes, that's workable.
> 
> >
> > For the pool->alloc.cache we either accept, that it will eventually
> > after some time be emptied (it is only in a 100% XDP_DROP workload that
> > it will continue to reuse same pages).   Or we simply clear the
> > pool->alloc.cache when calling page_pool_update_nid().
> 
> Simply clearing the pool->alloc.cache when calling page_pool_update_nid()
> seems better.
> 
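
For illustration, the eviction described above (dropping wrong-node pages
when consuming recycled pages from the ptr_ring) could look roughly like the
sketch below; it borrows helper names from the page_pool code of the time
and is only a sketch, not a tested patch:

        /* Sketch for __page_pool_get_cached(): when pulling a recycled
         * page out of the ptr_ring, release it back to the page allocator
         * if it sits on the wrong NUMA node instead of reusing it.
         */
        struct page *page = __ptr_ring_consume(&pool->ring);

        if (page && page_to_nid(page) != numa_mem_id()) {
                /* wrong node: return instead of recycling */
                __page_pool_return_page(pool, page);
                page = NULL;
        }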

How about the code below? The driver can set p.nid to any node; it will be adjusted during NAPI polling, so IRQ migration is not a problem, but it does add a check to the hot path.

Thanks

-Li
> >

Comments

Ilias Apalodimas Dec. 16, 2019, 10:13 a.m. UTC | #1
On Mon, Dec 16, 2019 at 04:02:04AM +0000, Li,Rongqing wrote:
> 
> 
> > -----Original Message-----
> > From: Yunsheng Lin [mailto:linyunsheng@huawei.com]
> > Sent: December 16, 2019 9:51
> > To: Jesper Dangaard Brouer <brouer@redhat.com>
> > Cc: Li,Rongqing <lirongqing@baidu.com>; Saeed Mahameed
> > <saeedm@mellanox.com>; ilias.apalodimas@linaro.org;
> > jonathan.lemon@gmail.com; netdev@vger.kernel.org; mhocko@kernel.org;
> > peterz@infradead.org; Greg Kroah-Hartman <gregkh@linuxfoundation.org>;
> > bhelgaas@google.com; linux-kernel@vger.kernel.org; Björn Töpel
> > <bjorn.topel@intel.com>
> > Subject: Re: [PATCH][v2] page_pool: handle page recycle for NUMA_NO_NODE
> > condition
> > 
> > On 2019/12/13 16:48, Jesper Dangaard Brouer wrote:> You are basically saying
> > that the NUMA check should be moved to
> > > allocation time, as it is running the RX-CPU (NAPI).  And eventually
> > > after some time the pages will come from correct NUMA node.
> > >
> > > I think we can do that, and only affect the semi-fast-path.
> > > We just need to handle that pages in the ptr_ring that are recycled
> > > can be from the wrong NUMA node.  In __page_pool_get_cached() when
> > > consuming pages from the ptr_ring (__ptr_ring_consume_batched), then
> > > we can evict pages from wrong NUMA node.
> > 
> > Yes, that's workable.
> > 
> > >
> > > For the pool->alloc.cache we either accept, that it will eventually
> > > after some time be emptied (it is only in a 100% XDP_DROP workload that
> > > it will continue to reuse same pages).   Or we simply clear the
> > > pool->alloc.cache when calling page_pool_update_nid().
> > 
> > Simply clearing the pool->alloc.cache when calling page_pool_update_nid()
> > seems better.
> > 
> 
> How about the code below? The driver can set p.nid to any node; it will be adjusted during NAPI polling, so IRQ migration is not a problem, but it does add a check to the hot path.

We'll have to check the impact on a high-speed (e.g. 100 Gbit) interface
before doing anything like that. Saeed's current patch runs once per NAPI
poll; this runs once per packet, so the load might be measurable.
The READ_ONCE is needed in case all producers/consumers run on the same CPU,
right?


Thanks
/Ilias
> 
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index a6aefe989043..4374a6239d17 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -108,6 +108,10 @@ static struct page *__page_pool_get_cached(struct page_pool *pool)
>                 if (likely(pool->alloc.count)) {
>                         /* Fast-path */
>                         page = pool->alloc.cache[--pool->alloc.count];
> +
> +                       if (unlikely(READ_ONCE(pool->p.nid) != numa_mem_id()))
> +                               WRITE_ONCE(pool->p.nid, numa_mem_id());
> +
>                         return page;
>                 }
>                 refill = true;
> @@ -155,6 +159,10 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>         if (pool->p.order)
>                 gfp |= __GFP_COMP;
>  
> +
> +       if (unlikely(READ_ONCE(pool->p.nid) != numa_mem_id()))
> +               WRITE_ONCE(pool->p.nid, numa_mem_id());
> +
>         /* FUTURE development:
>          *
>          * Current slow-path essentially falls back to single page
> Thanks
> 
> -Li
> > >
>
Ilias Apalodimas Dec. 16, 2019, 10:16 a.m. UTC | #2
> > > 
> > > Simply clearing the pool->alloc.cache when calling page_pool_update_nid()
> > > seems better.
> > > 
> > 
> > How about the code below? The driver can set p.nid to any node; it will be adjusted during NAPI polling, so IRQ migration is not a problem, but it does add a check to the hot path.
> 
> We'll have to check the impact on a high-speed (e.g. 100 Gbit) interface
> before doing anything like that. Saeed's current patch runs once per NAPI
> poll; this runs once per packet, so the load might be measurable.
> The READ_ONCE is needed in case all producers/consumers run on the same CPU,

I meant different cpus!

> right?
> 
> 
> Thanks
> /Ilias
> > 
> > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > index a6aefe989043..4374a6239d17 100644
> > --- a/net/core/page_pool.c
> > +++ b/net/core/page_pool.c
> > @@ -108,6 +108,10 @@ static struct page *__page_pool_get_cached(struct page_pool *pool)
> >                 if (likely(pool->alloc.count)) {
> >                         /* Fast-path */
> >                         page = pool->alloc.cache[--pool->alloc.count];
> > +
> > +                       if (unlikely(READ_ONCE(pool->p.nid) != numa_mem_id()))
> > +                               WRITE_ONCE(pool->p.nid, numa_mem_id());
> > +
> >                         return page;
> >                 }
> >                 refill = true;
> > @@ -155,6 +159,10 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
> >         if (pool->p.order)
> >                 gfp |= __GFP_COMP;
> >  
> > +
> > +       if (unlikely(READ_ONCE(pool->p.nid) != numa_mem_id()))
> > +               WRITE_ONCE(pool->p.nid, numa_mem_id());
> > +
> >         /* FUTURE development:
> >          *
> >          * Current slow-path essentially falls back to single page
> > Thanks
> > 
> > -Li
> > > >
> >
Li RongQing Dec. 16, 2019, 10:57 a.m. UTC | #3
> -----Original Message-----
> From: Ilias Apalodimas [mailto:ilias.apalodimas@linaro.org]
> Sent: December 16, 2019 18:17
> To: Li,Rongqing <lirongqing@baidu.com>
> Cc: Yunsheng Lin <linyunsheng@huawei.com>; Jesper Dangaard Brouer
> <brouer@redhat.com>; Saeed Mahameed <saeedm@mellanox.com>;
> jonathan.lemon@gmail.com; netdev@vger.kernel.org; mhocko@kernel.org;
> peterz@infradead.org; Greg Kroah-Hartman <gregkh@linuxfoundation.org>;
> bhelgaas@google.com; linux-kernel@vger.kernel.org; Björn Töpel
> <bjorn.topel@intel.com>
> Subject: Re: Re: [PATCH][v2] page_pool: handle page recycle for
> NUMA_NO_NODE condition
> 
> > > >
> > > > Simply clearing the pool->alloc.cache when calling
> > > > page_pool_update_nid() seems better.
> > > >
> > >
> > > How about the code below? The driver can set p.nid to any node, which will
> be adjusted during NAPI polling, so IRQ migration is not a problem, but it
> does add a check to the hot path.
> >
> > We'll have to check the impact on a high-speed (e.g. 100 Gbit)
> > interface before doing anything like that. Saeed's current patch runs
> > once per NAPI poll; this runs once per packet, so the load might be
> > measurable. The READ_ONCE is needed in case all producers/consumers
> > run on the same CPU
> 
> I meant different cpus!
> 

Without the READ_ONCE check, pool->p.nid would be written on every call and
its cache line dirtied, even though it is not shared by multiple CPUs.

See Eric's patch:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=503978aca46124cd714703e180b9c8292ba50ba7
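
The point is the check-before-write idiom: read the field first and only
write when the value actually changes, so the cache line stays clean on the
common path. A generic sketch of the idiom (the helper name here is made up
for illustration, not taken from Eric's patch or the page_pool code):

        /* Illustrative helper (hypothetical name): the read guard avoids
         * dirtying the cache line when the node id is already correct.
         */
        static inline void pp_set_nid_if_changed(struct page_pool *pool, int nid)
        {
                if (READ_ONCE(pool->p.nid) != nid)
                        WRITE_ONCE(pool->p.nid, nid);
        }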

-Li 
> > right?
> >
> >
> > Thanks
> > /Ilias
> > >
> > > diff --git a/net/core/page_pool.c b/net/core/page_pool.c index
> > > a6aefe989043..4374a6239d17 100644
> > > --- a/net/core/page_pool.c
> > > +++ b/net/core/page_pool.c
> > > @@ -108,6 +108,10 @@ static struct page
> *__page_pool_get_cached(struct page_pool *pool)
> > >                 if (likely(pool->alloc.count)) {
> > >                         /* Fast-path */
> > >                         page =
> > > pool->alloc.cache[--pool->alloc.count];
> > > +
> > > +                       if (unlikely(READ_ONCE(pool->p.nid) !=
> numa_mem_id()))
> > > +                               WRITE_ONCE(pool->p.nid,
> > > + numa_mem_id());
> > > +
> > >                         return page;
> > >                 }
> > >                 refill = true;
> > > @@ -155,6 +159,10 @@ static struct page
> *__page_pool_alloc_pages_slow(struct page_pool *pool,
> > >         if (pool->p.order)
> > >                 gfp |= __GFP_COMP;
> > >
> > > +
> > > +       if (unlikely(READ_ONCE(pool->p.nid) != numa_mem_id()))
> > > +               WRITE_ONCE(pool->p.nid, numa_mem_id());
> > > +
> > >         /* FUTURE development:
> > >          *
> > >          * Current slow-path essentially falls back to single page
> > > Thanks
> > >
> > > -Li
> > > > >
> > >
Saeed Mahameed Dec. 17, 2019, 7:38 p.m. UTC | #4
On Mon, 2019-12-16 at 12:13 +0200, Ilias Apalodimas wrote:
> On Mon, Dec 16, 2019 at 04:02:04AM +0000, Li,Rongqing wrote:
> > 
> > > -----Original Message-----
> > > From: Yunsheng Lin [mailto:linyunsheng@huawei.com]
> > > Sent: December 16, 2019 9:51
> > > To: Jesper Dangaard Brouer <brouer@redhat.com>
> > > Cc: Li,Rongqing <lirongqing@baidu.com>; Saeed Mahameed
> > > <saeedm@mellanox.com>; ilias.apalodimas@linaro.org;
> > > jonathan.lemon@gmail.com; netdev@vger.kernel.org; mhocko@kernel.org;
> > > peterz@infradead.org; Greg Kroah-Hartman <gregkh@linuxfoundation.org>;
> > > bhelgaas@google.com; linux-kernel@vger.kernel.org; Björn Töpel
> > > <bjorn.topel@intel.com>
> > > Subject: Re: [PATCH][v2] page_pool: handle page recycle for
> > > NUMA_NO_NODE condition
> > > 
> > > On 2019/12/13 16:48, Jesper Dangaard Brouer wrote:> You are
> > > basically saying
> > > that the NUMA check should be moved to
> > > > allocation time, as it is running the RX-CPU (NAPI).  And
> > > > eventually
> > > > after some time the pages will come from correct NUMA node.
> > > > 
> > > > I think we can do that, and only affect the semi-fast-path.
> > > > We just need to handle that pages in the ptr_ring that are
> > > > recycled
> > > > can be from the wrong NUMA node.  In __page_pool_get_cached()
> > > > when
> > > > consuming pages from the ptr_ring (__ptr_ring_consume_batched),
> > > > then
> > > > we can evict pages from wrong NUMA node.
> > > 
> > > Yes, that's workable.
> > > 
> > > > For the pool->alloc.cache we either accept, that it will
> > > > eventually
> > > > after some time be emptied (it is only in a 100% XDP_DROP
> > > > workload that
> > > > it will continue to reuse same pages).   Or we simply clear the
> > > > pool->alloc.cache when calling page_pool_update_nid().
> > > 
> > > Simply clearing the pool->alloc.cache when calling
> > > page_pool_update_nid()
> > > seems better.
> > > 
> > 
> > How about the code below? The driver can set p.nid to any node; it
> > will be adjusted during NAPI polling, so IRQ migration is not a
> > problem, but it does add a check to the hot path.
> 
> We'll have to check the impact on a high-speed (e.g. 100 Gbit)
> interface before doing anything like that. Saeed's current patch runs
> once per NAPI poll; this runs once per packet, so the load might be
> measurable.
> The READ_ONCE is needed in case all producers/consumers run on the
> same CPU, right?
> 

I agree with Ilias, and as I explained, this will bias the pool to the
CPU-local node only, which we want to avoid.

Li, can you please check if this fixes your issue:

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index a6aefe989043..00c99282a306 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -28,6 +28,9 @@ static int page_pool_init(struct page_pool *pool,
 
        memcpy(&pool->p, params, sizeof(pool->p));
 
+       /* overwrite to allow recycling.. */
+       if (pool->p.nid == NUMA_NO_NODE) 
+               pool->p.nid = numa_mem_id(); 
+

If the user wants dev_to_node(), they can pass dev_to_node() at pool
initialization rather than NUMA_NO_NODE.
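
For example, a driver that wants the pool pinned to the device's node could
fill in page_pool_params along these lines (a sketch; the variable names and
sizes are placeholders, not taken from any particular driver):

        /* Sketch of driver-side setup: pass the device's NUMA node instead
         * of NUMA_NO_NODE.  "netdev_parent" and "rx_ring_size" stand in for
         * whatever the driver already has at hand.
         */
        struct page_pool_params pp_params = {
                .order          = 0,
                .flags          = PP_FLAG_DMA_MAP,
                .pool_size      = rx_ring_size,
                .nid            = dev_to_node(netdev_parent), /* not NUMA_NO_NODE */
                .dev            = netdev_parent,
                .dma_dir        = DMA_FROM_DEVICE,
        };
        struct page_pool *pool = page_pool_create(&pp_params);

        if (IS_ERR(pool))
                return PTR_ERR(pool);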


> Thanks
> /Ilias
> > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > index a6aefe989043..4374a6239d17 100644
> > --- a/net/core/page_pool.c
> > +++ b/net/core/page_pool.c
> > @@ -108,6 +108,10 @@ static struct page
> > *__page_pool_get_cached(struct page_pool *pool)
> >                 if (likely(pool->alloc.count)) {
> >                         /* Fast-path */
> >                         page = pool->alloc.cache[--pool-
> > >alloc.count];
> > +
> > +                       if (unlikely(READ_ONCE(pool->p.nid) !=
> > numa_mem_id()))
> > +                               WRITE_ONCE(pool->p.nid,
> > numa_mem_id());
> > +
> >                         return page;
> >                 }
> >                 refill = true;
> > @@ -155,6 +159,10 @@ static struct page
> > *__page_pool_alloc_pages_slow(struct page_pool *pool,
> >         if (pool->p.order)
> >                 gfp |= __GFP_COMP;
> >  
> > +
> > +       if (unlikely(READ_ONCE(pool->p.nid) != numa_mem_id()))
> > +               WRITE_ONCE(pool->p.nid, numa_mem_id());
> > +
> >         /* FUTURE development:
> >          *
> >          * Current slow-path essentially falls back to single page
> > Thanks
> > 
> > -Li

Patch

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index a6aefe989043..4374a6239d17 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -108,6 +108,10 @@  static struct page *__page_pool_get_cached(struct page_pool *pool)
                if (likely(pool->alloc.count)) {
                        /* Fast-path */
                        page = pool->alloc.cache[--pool->alloc.count];
+
+                       if (unlikely(READ_ONCE(pool->p.nid) != numa_mem_id()))
+                               WRITE_ONCE(pool->p.nid, numa_mem_id());
+
                        return page;
                }
                refill = true;
@@ -155,6 +159,10 @@  static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
        if (pool->p.order)
                gfp |= __GFP_COMP;
 
+
+       if (unlikely(READ_ONCE(pool->p.nid) != numa_mem_id()))
+               WRITE_ONCE(pool->p.nid, numa_mem_id());
+
        /* FUTURE development:
         *
         * Current slow-path essentially falls back to single page