
[3/3] powerpc/mm: Add comments on vmemmap physical mapping

Message ID 20170406141450.16060-3-khandual@linux.vnet.ibm.com (mailing list archive)
State Accepted
Commit 39e46751839dfe4c34eb354eee1e278082fc9d07
Headers show

Commit Message

Anshuman Khandual April 6, 2017, 2:14 p.m. UTC
Add an explanation of how the physical mapping behind the vmemmap
based struct page layout is allocated and tracked through a linked
list. Also note a possible race condition.

Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
---
Previous discussion on this: http://patchwork.ozlabs.org/patch/584110/
Michael Ellerman had agreed to take the comments alone.

 arch/powerpc/mm/init_64.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

Comments

Michael Ellerman June 29, 2017, 12:21 p.m. UTC | #1
On Thu, 2017-04-06 at 14:14:50 UTC, Anshuman Khandual wrote:
> Add an explanation of how the physical mapping behind the vmemmap
> based struct page layout is allocated and tracked through a linked
> list. Also note a possible race condition.
> 
> Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/39e46751839dfe4c34eb354eee1e27

cheers

Patch

diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index 2a15986..6e5c54d 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -118,8 +118,28 @@  static int __meminit vmemmap_populated(unsigned long start, int page_size)
 	return 0;
 }
 
+/*
+ * vmemmap virtual address space management does not have a traditional page
+ * table to track which virtual struct pages are backed by a physical mapping.
+ * The virtual to physical mappings are tracked in a simple linked list
+ * format. 'vmemmap_list' maintains the entire vmemmap physical mapping at
+ * all times, whereas the 'next' list maintains the available
+ * vmemmap_backing structures which have been deleted from the
+ * 'vmemmap_list' during system runtime (memory hotplug remove
+ * operation). The freed 'vmemmap_backing' structures are reused later when
+ * new requests come in without allocating fresh memory. This pointer also
+ * tracks the allocated 'vmemmap_backing' structures as we allocate one
+ * full page of memory at a time when we don't have any.
+ */
 struct vmemmap_backing *vmemmap_list;
 static struct vmemmap_backing *next;
+
+/* The same pointer 'next' tracks individual chunks inside the allocated
+ * full page during boot time and again tracks the freed nodes during
+ * runtime. This is racy, but the race never happens because the two uses
+ * are separated by the boot process. It would be a problem if we somehow
+ * had a memory hotplug operation during boot!
+ */
 static int num_left;
 static int num_freed;
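The list handling the patch's comments describe can be sketched as a small userspace model. This is an illustration, not the kernel code: malloc() stands in for vmemmap_alloc_block(), PAGE_SIZE is hard-coded, and the helper names (vmemmap_list_alloc, vmemmap_list_free_entry, vmemmap_list_populate) are simplified stand-ins for the routines in arch/powerpc/mm/init_64.c.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define PAGE_SIZE 4096UL	/* stand-in for the kernel's PAGE_SIZE */

struct vmemmap_backing {
	struct vmemmap_backing *list;	/* chains entries on vmemmap_list */
	unsigned long phys;
	unsigned long virt_addr;
};

struct vmemmap_backing *vmemmap_list;	/* all live mappings */
static struct vmemmap_backing *next;	/* free chunks / freed entries */
static int num_left;			/* unused chunks in the current page */
static int num_freed;			/* entries on the freed list */

static struct vmemmap_backing *vmemmap_list_alloc(void)
{
	struct vmemmap_backing *vmem_back;

	/* Reuse an entry freed by a previous hot-remove, if any. */
	if (num_freed) {
		num_freed--;
		vmem_back = next;
		next = next->list;
		return vmem_back;
	}

	/* Otherwise carve the next chunk out of a bulk-allocated page. */
	if (!num_left) {
		next = malloc(PAGE_SIZE);	/* kernel: vmemmap_alloc_block() */
		if (!next)
			return NULL;
		num_left = PAGE_SIZE / sizeof(struct vmemmap_backing);
	}

	num_left--;
	return next++;
}

/* Hot-remove path: push a no-longer-needed entry onto the reuse list. */
static void vmemmap_list_free_entry(struct vmemmap_backing *vmem_back)
{
	vmem_back->list = next;
	next = vmem_back;
	num_freed++;
}

/* Record a new virtual-to-physical mapping on vmemmap_list. */
static void vmemmap_list_populate(unsigned long phys, unsigned long virt)
{
	struct vmemmap_backing *vmem_back = vmemmap_list_alloc();

	if (!vmem_back)
		return;
	vmem_back->phys = phys;
	vmem_back->virt_addr = virt;
	vmem_back->list = vmemmap_list;
	vmemmap_list = vmem_back;
}
```

Note how both the boot-time chunk carving and the runtime freed-entry reuse go through the single 'next' pointer, which is exactly the benign race the second comment calls out.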