
[1/2] mm: Replace nr_node_ids for loop with for_each_node in list lru

Message ID 1441737107-23103-2-git-send-email-raghavendra.kt@linux.vnet.ibm.com (mailing list archive)
State Superseded

Commit Message

Raghavendra K T Sept. 8, 2015, 6:31 p.m. UTC
The functions touched by this patch are in the slow path, which is
invoked whenever alloc_super is called during mounts.

Though this should not make a difference on architectures with
sequential NUMA node ids, on powerpc, which can have sparse node ids
(e.g. a 4-node system with node ids 0, 1, 16, 17 is common), this
patch saves some unnecessary allocations for non-existent NUMA nodes.

Even without that saving, the patch arguably makes the code more
readable.

Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 mm/list_lru.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)
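
For illustration, a minimal sketch of what the change buys on a
hypothetical sparse layout (init_one_node() below is a made-up
placeholder, not a kernel function); for_each_node() iterates only over
the ids set in node_possible_map:

/*
 * Hypothetical powerpc box with possible nodes {0, 1, 16, 17}:
 * nr_node_ids is 18 (highest possible id + 1).
 */
int i;

/* Before: visits every id below nr_node_ids, including the holes 2..15. */
for (i = 0; i < nr_node_ids; i++)
	init_one_node(i);

/* After: visits only the possible ids, i.e. 0, 1, 16 and 17. */
for_each_node(i)
	init_one_node(i);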

Comments

Vladimir Davydov Sept. 14, 2015, 9 a.m. UTC | #1
Hi,

On Wed, Sep 09, 2015 at 12:01:46AM +0530, Raghavendra K T wrote:
> The functions used in the patch are in slowpath, which gets called
> whenever alloc_super is called during mounts.
> 
> Though this should not make difference for the architectures with
> sequential numa node ids, for the powerpc which can potentially have
> sparse node ids (for e.g., 4 node system having numa ids, 0,1,16,17
> is common), this patch saves some unnecessary allocations for
> non existing numa nodes.
> 
> Even without that saving, perhaps patch makes code more readable.

Do I understand correctly that node 0 must always be in
node_possible_map? I ask, because we currently test
lru->node[0].memcg_lrus to determine if the list is memcg aware.
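
For reference, the check in question, as it stands in mm/list_lru.c at
this point in the thread (the same one-line body Raghavendra later
proposes to annotate with a comment):

static inline bool list_lru_memcg_aware(struct list_lru *lru)
{
	return !!lru->node[0].memcg_lrus;
}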

> 
> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
> ---
>  mm/list_lru.c | 23 +++++++++++++++--------
>  1 file changed, 15 insertions(+), 8 deletions(-)
> 
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index 909eca2..5a97f83 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -377,7 +377,7 @@ static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
>  {
>  	int i;
>  
> -	for (i = 0; i < nr_node_ids; i++) {
> +	for_each_node(i) {
>  		if (!memcg_aware)
>  			lru->node[i].memcg_lrus = NULL;

So, we don't explicitly initialize memcg_lrus for nodes that are not in
node_possible_map. That's OK, because we allocate lru->node using
kzalloc. However, this partial nullifying in case !memcg_aware looks
confusing IMO. Let's drop it, I mean something like this:

static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
{
	int i;

	if (!memcg_aware)
		return 0;

	for_each_node(i) {
		if (memcg_init_list_lru_node(&lru->node[i]))
			goto fail;
	}

Thanks,
Vladimir
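
Spelling out the part the snippet stops short of, the full function being
suggested would presumably look something like the sketch below; the
fail: path here is an assumption, mirroring the rollback already present
in the patch:

static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
{
	int i;

	if (!memcg_aware)
		return 0;

	for_each_node(i) {
		if (memcg_init_list_lru_node(&lru->node[i]))
			goto fail;
	}
	return 0;
fail:
	/* Undo only the nodes that were actually initialized. */
	for (i = i - 1; i >= 0; i--) {
		if (!lru->node[i].memcg_lrus)
			continue;
		memcg_destroy_list_lru_node(&lru->node[i]);
	}
	return -ENOMEM;
}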
Raghavendra K T Sept. 14, 2015, 11:39 a.m. UTC | #2
On 09/14/2015 02:30 PM, Vladimir Davydov wrote:
> Hi,
>
> On Wed, Sep 09, 2015 at 12:01:46AM +0530, Raghavendra K T wrote:
>> The functions used in the patch are in slowpath, which gets called
>> whenever alloc_super is called during mounts.
>>
>> Though this should not make difference for the architectures with
>> sequential numa node ids, for the powerpc which can potentially have
>> sparse node ids (for e.g., 4 node system having numa ids, 0,1,16,17
>> is common), this patch saves some unnecessary allocations for
>> non existing numa nodes.
>>
>> Even without that saving, perhaps patch makes code more readable.
>
> Do I understand correctly that node 0 must always be in
> node_possible_map? I ask, because we currently test
> lru->node[0].memcg_lrus to determine if the list is memcg aware.
>

Yes, node 0 is always there. So it should not be a problem.

>>
>> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
>> ---
>>   mm/list_lru.c | 23 +++++++++++++++--------
>>   1 file changed, 15 insertions(+), 8 deletions(-)
>>
>> diff --git a/mm/list_lru.c b/mm/list_lru.c
>> index 909eca2..5a97f83 100644
>> --- a/mm/list_lru.c
>> +++ b/mm/list_lru.c
>> @@ -377,7 +377,7 @@ static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
>>   {
>>   	int i;
>>
>> -	for (i = 0; i < nr_node_ids; i++) {
>> +	for_each_node(i) {
>>   		if (!memcg_aware)
>>   			lru->node[i].memcg_lrus = NULL;
>
> So, we don't explicitly initialize memcg_lrus for nodes that are not in
> node_possible_map. That's OK, because we allocate lru->node using
> kzalloc. However, this partial nullifying in case !memcg_aware looks
> confusing IMO. Let's drop it, I mean something like this:

Yes, you are right, and we do not need the memcg_aware check inside the
for loop either.
Will change as per your suggestion and send V2.
Thanks for the review.

>
> static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
> {
> 	int i;
>
> 	if (!memcg_aware)
> 		return 0;
>
> 	for_each_node(i) {
> 		if (memcg_init_list_lru_node(&lru->node[i]))
> 			goto fail;
> 	}
>
Vladimir Davydov Sept. 14, 2015, 12:04 p.m. UTC | #3
On Mon, Sep 14, 2015 at 05:09:31PM +0530, Raghavendra K T wrote:
> On 09/14/2015 02:30 PM, Vladimir Davydov wrote:
> >On Wed, Sep 09, 2015 at 12:01:46AM +0530, Raghavendra K T wrote:
> >>The functions used in the patch are in slowpath, which gets called
> >>whenever alloc_super is called during mounts.
> >>
> >>Though this should not make difference for the architectures with
> >>sequential numa node ids, for the powerpc which can potentially have
> >>sparse node ids (for e.g., 4 node system having numa ids, 0,1,16,17
> >>is common), this patch saves some unnecessary allocations for
> >>non existing numa nodes.
> >>
> >>Even without that saving, perhaps patch makes code more readable.
> >
> >Do I understand correctly that node 0 must always be in
> >node_possible_map? I ask, because we currently test
> >lru->node[0].memcg_lrus to determine if the list is memcg aware.
> >
> 
> Yes, node 0 is always there. So it should not be a problem.

I think it should be mentioned in the comment to list_lru_memcg_aware
then.

Thanks,
Vladimir
Raghavendra K T Sept. 14, 2015, 1:05 p.m. UTC | #4
On 09/14/2015 05:34 PM, Vladimir Davydov wrote:
> On Mon, Sep 14, 2015 at 05:09:31PM +0530, Raghavendra K T wrote:
>> On 09/14/2015 02:30 PM, Vladimir Davydov wrote:
>>> On Wed, Sep 09, 2015 at 12:01:46AM +0530, Raghavendra K T wrote:
>>>> The functions used in the patch are in slowpath, which gets called
>>>> whenever alloc_super is called during mounts.
>>>>
>>>> Though this should not make difference for the architectures with
>>>> sequential numa node ids, for the powerpc which can potentially have
>>>> sparse node ids (for e.g., 4 node system having numa ids, 0,1,16,17
>>>> is common), this patch saves some unnecessary allocations for
>>>> non existing numa nodes.
>>>>
>>>> Even without that saving, perhaps patch makes code more readable.
>>>
>>> Do I understand correctly that node 0 must always be in
>>> node_possible_map? I ask, because we currently test
>>> lru->node[0].memcg_lrus to determine if the list is memcg aware.
>>>
>>
>> Yes, node 0 is always there. So it should not be a problem.
>
> I think it should be mentioned in the comment to list_lru_memcg_aware
> then.
>

Something like this?
static inline bool list_lru_memcg_aware(struct list_lru *lru)
{
         /*
          * This needs node 0 to be always present, even
          * in the systems supporting sparse numa ids.
          */
         return !!lru->node[0].memcg_lrus;
}
Vladimir Davydov Sept. 14, 2015, 1:27 p.m. UTC | #5
On Mon, Sep 14, 2015 at 06:35:59PM +0530, Raghavendra K T wrote:
> On 09/14/2015 05:34 PM, Vladimir Davydov wrote:
> >On Mon, Sep 14, 2015 at 05:09:31PM +0530, Raghavendra K T wrote:
> >>On 09/14/2015 02:30 PM, Vladimir Davydov wrote:
> >>>On Wed, Sep 09, 2015 at 12:01:46AM +0530, Raghavendra K T wrote:
> >>>>The functions used in the patch are in slowpath, which gets called
> >>>>whenever alloc_super is called during mounts.
> >>>>
> >>>>Though this should not make difference for the architectures with
> >>>>sequential numa node ids, for the powerpc which can potentially have
> >>>>sparse node ids (for e.g., 4 node system having numa ids, 0,1,16,17
> >>>>is common), this patch saves some unnecessary allocations for
> >>>>non existing numa nodes.
> >>>>
> >>>>Even without that saving, perhaps patch makes code more readable.
> >>>
> >>>Do I understand correctly that node 0 must always be in
> >>>node_possible_map? I ask, because we currently test
> >>>lru->node[0].memcg_lrus to determine if the list is memcg aware.
> >>>
> >>
> >>Yes, node 0 is always there. So it should not be a problem.
> >
> >I think it should be mentioned in the comment to list_lru_memcg_aware
> >then.
> >
> 
> Something like this: ?

Yeah, looks good to me.

Thanks,
Vladimir

> static inline bool list_lru_memcg_aware(struct list_lru *lru)
> {
>         /*
>          * This needs node 0 to be always present, even
>          * in the systems supporting sparse numa ids.
>          */
>         return !!lru->node[0].memcg_lrus;
> }
> 
>

Patch

diff --git a/mm/list_lru.c b/mm/list_lru.c
index 909eca2..5a97f83 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -377,7 +377,7 @@  static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
 {
 	int i;
 
-	for (i = 0; i < nr_node_ids; i++) {
+	for_each_node(i) {
 		if (!memcg_aware)
 			lru->node[i].memcg_lrus = NULL;
 		else if (memcg_init_list_lru_node(&lru->node[i]))
@@ -385,8 +385,11 @@  static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
 	}
 	return 0;
 fail:
-	for (i = i - 1; i >= 0; i--)
+	for (i = i - 1; i >= 0; i--) {
+		if (!lru->node[i].memcg_lrus)
+			continue;
 		memcg_destroy_list_lru_node(&lru->node[i]);
+	}
 	return -ENOMEM;
 }
 
@@ -397,7 +400,7 @@  static void memcg_destroy_list_lru(struct list_lru *lru)
 	if (!list_lru_memcg_aware(lru))
 		return;
 
-	for (i = 0; i < nr_node_ids; i++)
+	for_each_node(i)
 		memcg_destroy_list_lru_node(&lru->node[i]);
 }
 
@@ -409,16 +412,20 @@  static int memcg_update_list_lru(struct list_lru *lru,
 	if (!list_lru_memcg_aware(lru))
 		return 0;
 
-	for (i = 0; i < nr_node_ids; i++) {
+	for_each_node(i) {
 		if (memcg_update_list_lru_node(&lru->node[i],
 					       old_size, new_size))
 			goto fail;
 	}
 	return 0;
 fail:
-	for (i = i - 1; i >= 0; i--)
+	for (i = i - 1; i >= 0; i--) {
+		if (!lru->node[i].memcg_lrus)
+			continue;
+
 		memcg_cancel_update_list_lru_node(&lru->node[i],
 						  old_size, new_size);
+	}
 	return -ENOMEM;
 }
 
@@ -430,7 +437,7 @@  static void memcg_cancel_update_list_lru(struct list_lru *lru,
 	if (!list_lru_memcg_aware(lru))
 		return;
 
-	for (i = 0; i < nr_node_ids; i++)
+	for_each_node(i)
 		memcg_cancel_update_list_lru_node(&lru->node[i],
 						  old_size, new_size);
 }
@@ -485,7 +492,7 @@  static void memcg_drain_list_lru(struct list_lru *lru,
 	if (!list_lru_memcg_aware(lru))
 		return;
 
-	for (i = 0; i < nr_node_ids; i++)
+	for_each_node(i)
 		memcg_drain_list_lru_node(&lru->node[i], src_idx, dst_idx);
 }
 
@@ -522,7 +529,7 @@  int __list_lru_init(struct list_lru *lru, bool memcg_aware,
 	if (!lru->node)
 		goto out;
 
-	for (i = 0; i < nr_node_ids; i++) {
+	for_each_node(i) {
 		spin_lock_init(&lru->node[i].lock);
 		if (key)
 			lockdep_set_class(&lru->node[i].lock, key);
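
One subtlety worth spelling out about the hunks above: the forward loops
now skip the holes in node_possible_map, but the fail: rollback loops
still count down over every index below the failing one, so they can land
on a hole whose memcg_lrus was never allocated (it is still NULL from the
kzalloc of lru->node). That is what the added NULL checks guard against;
annotated, the rollback in memcg_init_list_lru() reads:

	/*
	 * E.g. with possible nodes {0, 1, 16, 17}, a failure at node 16
	 * leaves slots 2..15 untouched; they must not be destroyed.
	 */
	for (i = i - 1; i >= 0; i--) {
		if (!lru->node[i].memcg_lrus)
			continue;
		memcg_destroy_list_lru_node(&lru->node[i]);
	}
	return -ENOMEM;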