[2/3] slub: fallback to node_to_mem_node() node if allocating on memoryless node

Message ID 20140909190514.GE22906@linux.vnet.ibm.com
State Not Applicable
Delegated to: Michael Ellerman

Commit Message

Nishanth Aravamudan Sept. 9, 2014, 7:05 p.m. UTC
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Update the SLUB code to search for partial slabs on the nearest node
with memory in the presence of memoryless nodes. Additionally, do not
consider it to be an ALLOC_NODE_MISMATCH (and deactivate the slab) when
a memoryless-node specified allocation goes off-node.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>

---
v1 -> v2 (Nishanth):
  Add commit message
  Clean-up conditions in get_partial()

Comments

Andrew Morton Sept. 10, 2014, 12:11 a.m. UTC | #1
On Tue, 9 Sep 2014 12:05:14 -0700 Nishanth Aravamudan <nacc@linux.vnet.ibm.com> wrote:

> From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> 
> Update the SLUB code to search for partial slabs on the nearest node
> with memory in the presence of memoryless nodes. Additionally, do not
> consider it to be an ALLOC_NODE_MISMATCH (and deactivate the slab) when
> a memoryless-node specified allocation goes off-node.
> 
> ...
>
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1699,7 +1699,12 @@ static void *get_partial(struct kmem_cache *s, gfp_t flags, int node,
>  		struct kmem_cache_cpu *c)
>  {
>  	void *object;
> -	int searchnode = (node == NUMA_NO_NODE) ? numa_mem_id() : node;
> +	int searchnode = node;
> +
> +	if (node == NUMA_NO_NODE)
> +		searchnode = numa_mem_id();
> +	else if (!node_present_pages(node))
> +		searchnode = node_to_mem_node(node);

I expect a call to node_to_mem_node() will always be preceded by a test
of node_present_pages().  Perhaps node_to_mem_node() should just do the
node_present_pages() call itself?
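
One way that folded-in helper could look, as a sketch only: it assumes
the 3.17-era _node_numa_mem_[] lookup table that backs node_to_mem_node()
on CONFIG_HAVE_MEMORYLESS_NODES kernels, and it is not upstream code.

/*
 * Hypothetical variant of node_to_mem_node() that does the
 * node_present_pages() check itself, so callers such as
 * get_partial() could use it unconditionally.
 */
static inline int node_to_mem_node(int node)
{
	if (node_present_pages(node))
		return node;
	return _node_numa_mem_[node];	/* nearest node with memory */
}

With such a helper, get_partial()'s else-if branch would collapse into an
unconditional searchnode = node_to_mem_node(node).
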
Nishanth Aravamudan Sept. 10, 2014, 12:55 a.m. UTC | #2
On 09.09.2014 [17:11:25 -0700], Andrew Morton wrote:
> On Tue, 9 Sep 2014 12:05:14 -0700 Nishanth Aravamudan <nacc@linux.vnet.ibm.com> wrote:
> 
> > From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > 
> > Update the SLUB code to search for partial slabs on the nearest node
> > with memory in the presence of memoryless nodes. Additionally, do not
> > consider it to be an ALLOC_NODE_MISMATCH (and deactivate the slab) when
> > a memoryless-node specified allocation goes off-node.
> > 
> > ...
> >
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -1699,7 +1699,12 @@ static void *get_partial(struct kmem_cache *s, gfp_t flags, int node,
> >  		struct kmem_cache_cpu *c)
> >  {
> >  	void *object;
> > -	int searchnode = (node == NUMA_NO_NODE) ? numa_mem_id() : node;
> > +	int searchnode = node;
> > +
> > +	if (node == NUMA_NO_NODE)
> > +		searchnode = numa_mem_id();
> > +	else if (!node_present_pages(node))
> > +		searchnode = node_to_mem_node(node);
> 
> I expect a call to node_to_mem_node() will always be preceded by a test
> of node_present_pages().  Perhaps node_to_mem_node() should just do the
> node_present_pages() call itself?

Really, we don't need that test here. We could always use the result of
node_to_mem_node() in the else branch. If memoryless nodes are not
supported (CONFIG_HAVE_MEMORYLESS_NODES is off in .config), then
node_to_mem_node() trivially returns the node it was passed. If they
are supported, it returns the correct nearest-memory node for every
node.
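
For reference, the topology.h definitions behind this look roughly as
follows (a paraphrased sketch of the 3.17-era include/linux/topology.h,
not the exact upstream text):

#ifdef CONFIG_HAVE_MEMORYLESS_NODES
extern int _node_numa_mem_[MAX_NUMNODES];

/* Nearest node with memory, cached when the NUMA topology is set up. */
static inline int node_to_mem_node(int node)
{
	return _node_numa_mem_[node];
}
#else
/* No memoryless nodes: every node is its own nearest-memory node. */
static inline int node_to_mem_node(int node)
{
	return node;
}
#endif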
 
It's just an optimization (premature?) since we can avoid worrying (in
this path) about memoryless nodes if the node in question has memory.

And, in fact, in __slab_alloc(), we could do the following:

...
	int searchnode = node;

	if (node != NUMA_NO_NODE)
		searchnode = node_to_mem_node(node);

	if (node != searchnode &&
		unlikely(!node_match(page, searchnode))) {

...

which would minimize the impact to non-memoryless node NUMA configs.

Does that seem better to you? I can add comments to this patch as well.

Thanks,
Nish

Patch

diff --git a/mm/slub.c b/mm/slub.c
index 3e8afcc07a76..497fdfed2f01 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1699,7 +1699,12 @@ static void *get_partial(struct kmem_cache *s, gfp_t flags, int node,
 		struct kmem_cache_cpu *c)
 {
 	void *object;
-	int searchnode = (node == NUMA_NO_NODE) ? numa_mem_id() : node;
+	int searchnode = node;
+
+	if (node == NUMA_NO_NODE)
+		searchnode = numa_mem_id();
+	else if (!node_present_pages(node))
+		searchnode = node_to_mem_node(node);
 
 	object = get_partial_node(s, get_node(s, searchnode), c, flags);
 	if (object || node != NUMA_NO_NODE)
@@ -2280,11 +2285,18 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 redo:
 
 	if (unlikely(!node_match(page, node))) {
-		stat(s, ALLOC_NODE_MISMATCH);
-		deactivate_slab(s, page, c->freelist);
-		c->page = NULL;
-		c->freelist = NULL;
-		goto new_slab;
+		int searchnode = node;
+
+		if (node != NUMA_NO_NODE && !node_present_pages(node))
+			searchnode = node_to_mem_node(node);
+
+		if (unlikely(!node_match(page, searchnode))) {
+			stat(s, ALLOC_NODE_MISMATCH);
+			deactivate_slab(s, page, c->freelist);
+			c->page = NULL;
+			c->freelist = NULL;
+			goto new_slab;
+		}
 	}
 
 	/*
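
To make the behavioral change concrete, here is a hypothetical caller.
This is an illustration only, with invented cache parameters and node
numbers; it assumes a topology where node 1 is memoryless and
node_to_mem_node(1) == 0.

#include <linux/module.h>
#include <linux/slab.h>

static int __init memoryless_alloc_example_init(void)
{
	struct kmem_cache *cache;
	void *obj;

	cache = kmem_cache_create("example", 64, 0, 0, NULL);
	if (!cache)
		return -ENOMEM;

	/*
	 * Before the patch, a node-1 request always failed node_match()
	 * (node 1 has no pages) and deactivated the current cpu slab.
	 * With the patch, the request is retargeted at node 0, the
	 * nearest node with memory, and can be served from the current
	 * cpu slab if that slab already sits on node 0.
	 */
	obj = kmem_cache_alloc_node(cache, GFP_KERNEL, 1);
	if (obj)
		kmem_cache_free(cache, obj);

	kmem_cache_destroy(cache);
	return 0;
}
module_init(memoryless_alloc_example_init);
MODULE_LICENSE("GPL");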