Patchwork Relocatable kernel for ppc44x

Submitter Suzuki Poulose
Date June 15, 2011, 6:13 a.m.
Message ID <4DF84D92.2030803@in.ibm.com>
Permalink /patch/100478/
State Not Applicable

Comments

Suzuki Poulose - June 15, 2011, 6:13 a.m.
On 06/14/11 17:34, Michal Simek wrote:
> Hi,
>
> have someone tried to support RELOCATABLE kernel on ppc44x?
As Josh mentioned, I will be working on this. In fact, I was trying a couple of
patches towards this on PPC440x, but I am stuck debugging a hang that I am
experiencing with the changes. I am setting up a RISCWatch processor probe to
debug it.

Here is some information that I wanted to share:

The PPC440x currently uses 256M TLB entries to pin the lowmem. When we go for a
relocatable kernel, we have to:

1) Restrict the kernel load address to be 256M aligned

OR

2) Use 16M TLB entries (the next possible TLB page size supported) for the first
256M, and then use 256M TLB entries for the rest of lowmem.

Option 1 is not feasible.

Towards this, I have tried a patch that uses 16M TLB entries to map the entire
lowmem on an Ebony board, but that doesn't seem to work. I am setting up the JTAG
to debug the state.

I have attached the patch below for your reference. Any suggestions/comments would
be really helpful.


Thanks
Suzuki
David Laight - June 15, 2011, 9:30 a.m.
> The PPC440X currently uses 256M TLB entries to pin the 
> lowmem. When we go for a relocatable kernel we have to :
> 
> 1) Restrict the kernel load address to be 256M aligned
> 
> OR
> 
> 2) Use 16M TLB(the next possible TLB page size supported) 
> entries till the first
> 256M and then use the 256M TLB entries for the rest of lowmem.

What is wrong with:

3) Use 256M TLB entries with the lowest one including
   addresses below the kernel base.

Clearly the kernel shouldn't be accessing the addresses
below its base address - but that is true of a lot of
address space mapped into the kernel.

	David
John Williams - June 15, 2011, 9:38 a.m.
On Wed, Jun 15, 2011 at 11:30 AM, David Laight <David.Laight@aculab.com> wrote:

>
> > The PPC440X currently uses 256M TLB entries to pin the
> > lowmem. When we go for a relocatable kernel we have to :
> >
> > 1) Restrict the kernel load address to be 256M aligned
> >
> > OR
> >
> > 2) Use 16M TLB(the next possible TLB page size supported)
> > entries till the first
> > 256M and then use the 256M TLB entries for the rest of lowmem.
>
> What is wrong with:
>
> 3) Use 256M TLB entries with the lowest one including
>   addresses below the kernel base.
>
> Clearly the kernel shouldn't be accessing the addresses
> below its base address - but that is true of a lot of
> address space mapped into the kernel.
>

It gets mucky, since we will then need to assess how much of that 256M
mapping will be above the kernel base, determine whether that is sufficient to
boot the kernel, and if not, set up additional 16MB mappings, and so on.  It
might be cleaner to just use multiple 16MB mappings directly?

By the way, we have some patches to support a non-zero (but fixed) boot
address for PPC440.  They are against 2.6.31; it's pretty simple stuff,
except it also requires changes in the simpleboot wrapper.  We will post them
shortly, since they are relevant to this discussion.

John
Benjamin Herrenschmidt - June 15, 2011, 10:11 a.m.
On Wed, 2011-06-15 at 11:43 +0530, Suzuki Poulose wrote:
> On 06/14/11 17:34, Michal Simek wrote:
> > Hi,
> >
> > have someone tried to support RELOCATABLE kernel on ppc44x?
> As Josh, mentioned, I will be working on this. In fact I was trying a couple of
> patches towards this on PPC440x. But, I am stuck in debugging the hang that I am
> experiencing with the changes. I am setting up a RISCWatch processor probe to
> debug the same.
> 
> Here is some information that I wanted to share :
> 
> The PPC440X currently uses 256M TLB entries to pin the lowmem. When we go for a
> relocatable kernel we have to :
> 
> 1) Restrict the kernel load address to be 256M aligned

Wait a minute ... :-)

There's a difference between having the kernel run from any address and
mapping the linear mapping not starting at 0.

Those are completely orthogonal.

I don't see why off hand you are changing the way the TLB is used. The
only possible change needed is to make sure the initial bolted entry set
by the asm code properly covers the kernel in whatever it's "current"
location is. The rest is a matter of fixing up the relocations...

Cheers,
Ben.

> OR
> 
> 2) Use 16M TLB(the next possible TLB page size supported) entries till the first
> 256M and then use the 256M TLB entries for the rest of lowmem.
> 
> Option 1 is not feasible.
> 
> Towards this, I have tried a patch which uses 16M TLB entries to map the entire
> lowmem on an ebony board. But that doesn't seem to work. I am setting up the JTAG
> to debug the state.
> 
> I have attached the patch below for your reference. Any suggestions/comments would
> be really helpful.
> 
> 
> Thanks
> Suzuki
> 
> ==============================
> 
> 
> Use 16M TLB pages to pin the lowmem on PPC440x.
> 
> ---
>   arch/powerpc/include/asm/mmu-44x.h |    9 +++++++++
>   arch/powerpc/kernel/head_44x.S     |    2 +-
>   arch/powerpc/mm/44x_mmu.c          |    2 +-
>   3 files changed, 11 insertions(+), 2 deletions(-)
> 
> Index: linux-2.6.38.1/arch/powerpc/include/asm/mmu-44x.h
> ===================================================================
> --- linux-2.6.38.1.orig/arch/powerpc/include/asm/mmu-44x.h
> +++ linux-2.6.38.1/arch/powerpc/include/asm/mmu-44x.h
> @@ -121,7 +121,12 @@ typedef struct {
>   #endif
>   
>   /* Size of the TLBs used for pinning in lowmem */
> +#define PPC_PIN_SIZE	(1 << 24)	/* 16M */
> +#define PPC44x_TLB_PIN_SIZE	PPC44x_TLB_16M
> +#if 0
>   #define PPC_PIN_SIZE	(1 << 28)	/* 256M */
> +#define PPC44x_TLB_PIN_SIZE	PPC44x_TLB_256M
> +#endif
>   
>   #if (PAGE_SHIFT == 12)
>   #define PPC44x_TLBE_SIZE	PPC44x_TLB_4K
> @@ -142,7 +147,11 @@ typedef struct {
>   #error "Unsupported PAGE_SIZE"
>   #endif
>   
> +#if 0
>   #define mmu_linear_psize	MMU_PAGE_256M
> +#else
> +#define mmu_linear_psize	MMU_PAGE_16M
> +#endif
>   
>   #define PPC44x_PGD_OFF_SHIFT	(32 - PGDIR_SHIFT + PGD_T_LOG2)
>   #define PPC44x_PGD_OFF_MASK_BIT	(PGDIR_SHIFT - PGD_T_LOG2)
> Index: linux-2.6.38.1/arch/powerpc/kernel/head_44x.S
> ===================================================================
> --- linux-2.6.38.1.orig/arch/powerpc/kernel/head_44x.S
> +++ linux-2.6.38.1/arch/powerpc/kernel/head_44x.S
> @@ -805,7 +805,7 @@ skpinv:	addi	r4,r4,1				/* Increment */
>   
>   	/* pageid fields */
>   	clrrwi	r3,r3,10		/* Mask off the effective page number */
> -	ori	r3,r3,PPC44x_TLB_VALID | PPC44x_TLB_256M
> +	ori	r3,r3,PPC44x_TLB_VALID | PPC44x_TLB_PIN_SIZE
>   
>   	/* xlat fields */
>   	clrrwi	r4,r4,10		/* Mask off the real page number */
> Index: linux-2.6.38.1/arch/powerpc/mm/44x_mmu.c
> ===================================================================
> --- linux-2.6.38.1.orig/arch/powerpc/mm/44x_mmu.c
> +++ linux-2.6.38.1/arch/powerpc/mm/44x_mmu.c
> @@ -84,7 +84,7 @@ static void __init ppc44x_pin_tlb(unsign
>   	: "r" (PPC44x_TLB_SW | PPC44x_TLB_SR | PPC44x_TLB_SX | PPC44x_TLB_G),
>   #endif
>   	  "r" (phys),
> -	  "r" (virt | PPC44x_TLB_VALID | PPC44x_TLB_256M),
> +	  "r" (virt | PPC44x_TLB_VALID | PPC44x_TLB_PIN_SIZE),
>   	  "r" (entry),
>   	  "i" (PPC44x_TLB_PAGEID),
>   	  "i" (PPC44x_TLB_XLAT),
Benjamin Herrenschmidt - June 15, 2011, 10:14 a.m.
On Wed, 2011-06-15 at 10:30 +0100, David Laight wrote:
> > The PPC440X currently uses 256M TLB entries to pin the 
> > lowmem. When we go for a relocatable kernel we have to :
> > 
> > 1) Restrict the kernel load address to be 256M aligned
> > 
> > OR
> > 
> > 2) Use 16M TLB(the next possible TLB page size supported) 
> > entries till the first
> > 256M and then use the 256M TLB entries for the rest of lowmem.
> 
> What is wrong with:
> 
> 3) Use 256M TLB entries with the lowest one including
>    addresses below the kernel base.
> 
> Clearly the kernel shouldn't be accessing the addresses
> below its base address - but that is true of a lot of
> address space mapped into the kernel.

In the case of a relocatable kernel it's perfectly kosher to access
addresses below the kernel itself... Typically this is used for kdump
where the kdump kernel excecutes in place in a reserved area but access
to the rest of memory is allowed to ... well, do the dump :-) There
could be other reasons to do that too.

Cheers,
Ben.
Suzuki Poulose - June 15, 2011, 2:40 p.m.
On 06/15/11 15:41, Benjamin Herrenschmidt wrote:
> On Wed, 2011-06-15 at 11:43 +0530, Suzuki Poulose wrote:
>> On 06/14/11 17:34, Michal Simek wrote:
>>> Hi,
>>>
>>> have someone tried to support RELOCATABLE kernel on ppc44x?
>> As Josh, mentioned, I will be working on this. In fact I was trying a couple of
>> patches towards this on PPC440x. But, I am stuck in debugging the hang that I am
>> experiencing with the changes. I am setting up a RISCWatch processor probe to
>> debug the same.
>>
>> Here is some information that I wanted to share :
>>
>> The PPC440X currently uses 256M TLB entries to pin the lowmem. When we go for a
>> relocatable kernel we have to :
>>
>> 1) Restrict the kernel load address to be 256M aligned
>
> Wait a minute ... :-)
>
> There's a difference between having the kernel run from any address and
> mapping the linear mapping not starting at 0.
>
> Those are completely orthogonal.
>
> I don't see why off hand you are changing the way the TLB is used.

I started off with PHYSICAL_START support, and that kind of locked me into
this approach :-).

> The
> only possible change needed is to make sure the initial bolted entry set
> by the asm code properly covers the kernel in whatever it's "current"
> location is. The rest is a matter of fixing up the relocations...
Could we do something like the following?

If the kernel is loaded at X:

1. Map ((X-1) & ~256M) to PAGE_OFFSET, and so on, to cover the kernel in 256M
chunks.
2. Then process the relocations with (X % 256M).

Thanks

Suzuki
Tirumala Marri - June 15, 2011, 4:02 p.m.
On Wed, Jun 15, 2011 at 7:40 AM, Suzuki Poulose <suzuki@in.ibm.com> wrote:

> On 06/15/11 15:41, Benjamin Herrenschmidt wrote:
>
>> On Wed, 2011-06-15 at 11:43 +0530, Suzuki Poulose wrote:
>>
>>> On 06/14/11 17:34, Michal Simek wrote:
>>>
>>>> Hi,
>>>>
>>>> have someone tried to support RELOCATABLE kernel on ppc44x?
>>>>
>>> As Josh, mentioned, I will be working on this. In fact I was trying a
>>> couple of
>>> patches towards this on PPC440x. But, I am stuck in debugging the hang
>>> that I am
>>> experiencing with the changes. I am setting up a RISCWatch processor
>>> probe to
>>> debug the same.
>>>
>>> Here is some information that I wanted to share :
>>>
>>> The PPC440X currently uses 256M TLB entries to pin the lowmem. When we go
>>> for a
>>> relocatable kernel we have to :
>>>
>>> 1) Restrict the kernel load address to be 256M aligned
>>>
>>
>> Wait a minute ... :-)
>>
>> There's a difference between having the kernel run from any address and
>> mapping the linear mapping not starting at 0.
>>
>> Those are completely orthogonal.
>>
>> I don't see why off hand you are changing the way the TLB is used.
>>
>
> I started off with PHYSICAL_START support and that kind of hogged me into
> this approach :-).
>
> The
>
>> only possible change needed is to make sure the initial bolted entry set
>> by the asm code properly covers the kernel in whatever it's "current"
>> location is. The rest is a matter of fixing up the relocations...
>>
> Could we do something like,
>
> If kernel is loaded at X,
>
> 1. map : ((X-1) & ~256M) to PAGE_OFFSET and so on to cover the kernel in
> 256M
> chunks.
> 2. Then process the relocations with (X % 256M)
>
> Thanks
>
[marri] I had to deal with kernel relocation to a non-zero physical address.
I hacked a few places to make this work. In my case there were holes (multiples
of 250MB) in the low-memory region.  To handle these memory holes I manipulated
the "lmb" structure.

I had to depend on the bootloader to make sure that it is running from a
non-zero physical address; during Linux boot, it checks the current TLB it is
running from and creates the same TLB in Linux. Everything else should then
take care of itself.

--marri
Scott Wood - June 15, 2011, 7:47 p.m.
On Wed, 15 Jun 2011 20:11:55 +1000
Benjamin Herrenschmidt <benh@kernel.crashing.org> wrote:

> On Wed, 2011-06-15 at 11:43 +0530, Suzuki Poulose wrote:
> > On 06/14/11 17:34, Michal Simek wrote:
> > > Hi,
> > >
> > > have someone tried to support RELOCATABLE kernel on ppc44x?
> > As Josh, mentioned, I will be working on this. In fact I was trying a couple of
> > patches towards this on PPC440x. But, I am stuck in debugging the hang that I am
> > experiencing with the changes. I am setting up a RISCWatch processor probe to
> > debug the same.
> > 
> > Here is some information that I wanted to share :
> > 
> > The PPC440X currently uses 256M TLB entries to pin the lowmem. When we go for a
> > relocatable kernel we have to :
> > 
> > 1) Restrict the kernel load address to be 256M aligned
> 
> Wait a minute ... :-)
> 
> There's a difference between having the kernel run from any address and
> mapping the linear mapping not starting at 0.
> 
> Those are completely orthogonal.
> 
> I don't see why off hand you are changing the way the TLB is used. The
> only possible change needed is to make sure the initial bolted entry set
> by the asm code properly covers the kernel in whatever it's "current"
> location is. The rest is a matter of fixing up the relocations...

Changing where the linear mapping points is useful for AMP configurations,
where you're supposed to treat your memory as a subset of the real
memory.  This is implemented on e500 as CONFIG_RELOCATABLE, though
it should have been called something different, since it's not really
building a relocatable kernel (unlike what 64-bit does with
CONFIG_RELOCATABLE).

-Scott

Patch

==============================


Use 16M TLB pages to pin the lowmem on PPC440x.

---
  arch/powerpc/include/asm/mmu-44x.h |    9 +++++++++
  arch/powerpc/kernel/head_44x.S     |    2 +-
  arch/powerpc/mm/44x_mmu.c          |    2 +-
  3 files changed, 11 insertions(+), 2 deletions(-)

Index: linux-2.6.38.1/arch/powerpc/include/asm/mmu-44x.h
===================================================================
--- linux-2.6.38.1.orig/arch/powerpc/include/asm/mmu-44x.h
+++ linux-2.6.38.1/arch/powerpc/include/asm/mmu-44x.h
@@ -121,7 +121,12 @@  typedef struct {
  #endif
  
  /* Size of the TLBs used for pinning in lowmem */
+#define PPC_PIN_SIZE	(1 << 24)	/* 16M */
+#define PPC44x_TLB_PIN_SIZE	PPC44x_TLB_16M
+#if 0
  #define PPC_PIN_SIZE	(1 << 28)	/* 256M */
+#define PPC44x_TLB_PIN_SIZE	PPC44x_TLB_256M
+#endif
  
  #if (PAGE_SHIFT == 12)
  #define PPC44x_TLBE_SIZE	PPC44x_TLB_4K
@@ -142,7 +147,11 @@  typedef struct {
  #error "Unsupported PAGE_SIZE"
  #endif
  
+#if 0
  #define mmu_linear_psize	MMU_PAGE_256M
+#else
+#define mmu_linear_psize	MMU_PAGE_16M
+#endif
  
  #define PPC44x_PGD_OFF_SHIFT	(32 - PGDIR_SHIFT + PGD_T_LOG2)
  #define PPC44x_PGD_OFF_MASK_BIT	(PGDIR_SHIFT - PGD_T_LOG2)
Index: linux-2.6.38.1/arch/powerpc/kernel/head_44x.S
===================================================================
--- linux-2.6.38.1.orig/arch/powerpc/kernel/head_44x.S
+++ linux-2.6.38.1/arch/powerpc/kernel/head_44x.S
@@ -805,7 +805,7 @@  skpinv:	addi	r4,r4,1				/* Increment */
  
  	/* pageid fields */
  	clrrwi	r3,r3,10		/* Mask off the effective page number */
-	ori	r3,r3,PPC44x_TLB_VALID | PPC44x_TLB_256M
+	ori	r3,r3,PPC44x_TLB_VALID | PPC44x_TLB_PIN_SIZE
  
  	/* xlat fields */
  	clrrwi	r4,r4,10		/* Mask off the real page number */
Index: linux-2.6.38.1/arch/powerpc/mm/44x_mmu.c
===================================================================
--- linux-2.6.38.1.orig/arch/powerpc/mm/44x_mmu.c
+++ linux-2.6.38.1/arch/powerpc/mm/44x_mmu.c
@@ -84,7 +84,7 @@  static void __init ppc44x_pin_tlb(unsign
  	: "r" (PPC44x_TLB_SW | PPC44x_TLB_SR | PPC44x_TLB_SX | PPC44x_TLB_G),
  #endif
  	  "r" (phys),
-	  "r" (virt | PPC44x_TLB_VALID | PPC44x_TLB_256M),
+	  "r" (virt | PPC44x_TLB_VALID | PPC44x_TLB_PIN_SIZE),
  	  "r" (entry),
  	  "i" (PPC44x_TLB_PAGEID),
  	  "i" (PPC44x_TLB_XLAT),