
[v3,2/2] powerpc/mm: Tracking vDSO remap

Message ID b6ce07f8e1e0d654371aee70bd8eac310456d0df.1427289960.git.ldufour@linux.vnet.ibm.com (mailing list archive)
State Superseded

Commit Message

Laurent Dufour March 25, 2015, 1:53 p.m. UTC
Some processes (CRIU) move the vDSO area using the mremap system call. As
a consequence, the kernel reference to the vDSO base address is no longer
valid, and the signal return frame built once the vDSO has been moved does
not point to the new sigreturn address.

This patch handles vDSO remapping and unmapping.

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/mmu_context.h | 36 +++++++++++++++++++++++++++++++++-
 1 file changed, 35 insertions(+), 1 deletion(-)
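
For context, a minimal userspace sketch of the kind of vDSO move CRIU
performs (illustrative only: the target address below is arbitrary and
error handling is reduced to early returns):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	unsigned long start = 0, end = 0;
	char line[256];
	FILE *f = fopen("/proc/self/maps", "r");

	if (!f)
		return 1;
	/* Locate our own vDSO mapping. */
	while (fgets(line, sizeof(line), f)) {
		if (strstr(line, "[vdso]")) {
			sscanf(line, "%lx-%lx", &start, &end);
			break;
		}
	}
	fclose(f);
	if (!start)
		return 1;

	/* Move the whole vDSO to a fixed address.  Without the tracking
	 * added by this patch, the kernel's vdso_base is now stale and
	 * signal frames built afterwards point at the old location. */
	if (mremap((void *)start, end - start, end - start,
		   MREMAP_MAYMOVE | MREMAP_FIXED,
		   (void *)0x10000000UL) == MAP_FAILED)
		return 1;
	return 0;
}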

Comments

Ingo Molnar March 25, 2015, 6:33 p.m. UTC | #1
* Laurent Dufour <ldufour@linux.vnet.ibm.com> wrote:

> +static inline void arch_unmap(struct mm_struct *mm,
> +			struct vm_area_struct *vma,
> +			unsigned long start, unsigned long end)
> +{
> +	if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
> +		mm->context.vdso_base = 0;
> +}

So AFAICS PowerPC can have multi-page vDSOs, right?

So what happens if I munmap() the middle or end of the vDSO? The above 
condition only seems to cover unmaps that affect the first page. I 
think 'affects any page' ought to be the right condition? (But I know 
nothing about PowerPC so I might be wrong.)
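
To make that concrete, a tiny self-contained check with invented numbers
(a two-page vDSO at 0x1000, 4K pages):

#include <assert.h>

int main(void)
{
	unsigned long vdso_base = 0x1000, vdso_end = 0x3000;
	/* munmap(0x2000, 0x1000): unmap only the second vDSO page. */
	unsigned long start = 0x2000, end = 0x3000;

	/* The posted check misses it: only the first page is tested. */
	assert(!(start <= vdso_base && vdso_base < end));

	/* A range-overlap check catches any unmap touching the vDSO. */
	assert(start < vdso_end && vdso_base < end);
	return 0;
}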


> +#define __HAVE_ARCH_REMAP
> +static inline void arch_remap(struct mm_struct *mm,
> +			      unsigned long old_start, unsigned long old_end,
> +			      unsigned long new_start, unsigned long new_end)
> +{
> +	/*
> +	 * mremap() doesn't allow moving multiple vmas so we can limit the
> +	 * check to old_start == vdso_base.
> +	 */
> +	if (old_start == mm->context.vdso_base)
> +		mm->context.vdso_base = new_start;
> +}

mremap() doesn't allow moving multiple vmas, but it allows the 
movement of multi-page vmas and it also allows partial mremap()s, 
where it will split up a vma.

In particular, what happens if an mremap() is done with 
old_start == vdso_base, but a shorter end than the end of the vDSO? 
(i.e. a partial mremap() with fewer pages than the vDSO size)
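
For instance, a hypothetical partial move (assuming a two-page, 8K vDSO
and an arbitrary target address):

#define _GNU_SOURCE
#include <sys/mman.h>

/* Move only the first page of the vDSO.  old_start equals vdso_base,
 * so the posted arch_remap() would switch vdso_base to target even
 * though the second vDSO page stays behind at vdso_base + 4096. */
static void *partial_vdso_move(unsigned long vdso_base, unsigned long target)
{
	return mremap((void *)vdso_base, 4096, 4096,
		      MREMAP_MAYMOVE | MREMAP_FIXED, (void *)target);
}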

Thanks,

	Ingo
Ingo Molnar March 25, 2015, 6:36 p.m. UTC | #2
* Ingo Molnar <mingo@kernel.org> wrote:

> > +#define __HAVE_ARCH_REMAP
> > +static inline void arch_remap(struct mm_struct *mm,
> > +			      unsigned long old_start, unsigned long old_end,
> > +			      unsigned long new_start, unsigned long new_end)
> > +{
> > +	/*
> > +	 * mremap() doesn't allow moving multiple vmas so we can limit the
> > +	 * check to old_start == vdso_base.
> > +	 */
> > +	if (old_start == mm->context.vdso_base)
> > +		mm->context.vdso_base = new_start;
> > +}
> 
> mremap() doesn't allow moving multiple vmas, but it allows the 
> movement of multi-page vmas and it also allows partial mremap()s, 
> where it will split up a vma.

I.e. mremap() supports the shrinking (and growing) of vmas. In that 
case mremap() will unmap the end of the vma and will shrink the 
remaining vDSO vma.

Doesn't that result in a non-working vDSO that should zero out 
vdso_base?

Thanks,

	Ingo
Benjamin Herrenschmidt March 25, 2015, 9:09 p.m. UTC | #3
On Wed, 2015-03-25 at 19:33 +0100, Ingo Molnar wrote:
> * Laurent Dufour <ldufour@linux.vnet.ibm.com> wrote:
> 
> > +static inline void arch_unmap(struct mm_struct *mm,
> > +			struct vm_area_struct *vma,
> > +			unsigned long start, unsigned long end)
> > +{
> > +	if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
> > +		mm->context.vdso_base = 0;
> > +}
> 
> So AFAICS PowerPC can have multi-page vDSOs, right?
> 
> So what happens if I munmap() the middle or end of the vDSO? The above 
> condition only seems to cover unmaps that affect the first page. I 
> think 'affects any page' ought to be the right condition? (But I know 
> nothing about PowerPC so I might be wrong.)

You are right, we have at least two pages.
> 
> > +#define __HAVE_ARCH_REMAP
> > +static inline void arch_remap(struct mm_struct *mm,
> > +			      unsigned long old_start, unsigned long old_end,
> > +			      unsigned long new_start, unsigned long new_end)
> > +{
> > +	/*
> > +	 * mremap() doesn't allow moving multiple vmas so we can limit the
> > +	 * check to old_start == vdso_base.
> > +	 */
> > +	if (old_start == mm->context.vdso_base)
> > +		mm->context.vdso_base = new_start;
> > +}
> 
> mremap() doesn't allow moving multiple vmas, but it allows the 
> movement of multi-page vmas and it also allows partial mremap()s, 
> where it will split up a vma.
> 
> In particular, what happens if an mremap() is done with 
> old_start == vdso_base, but a shorter end than the end of the vDSO? 
> (i.e. a partial mremap() with fewer pages than the vDSO size)

Is there a way to forbid splitting? Does x86 deal with that case at all,
or does it not have to for some other reason?

Cheers,
Ben.

> Thanks,
> 
> 	Ingo
Benjamin Herrenschmidt March 25, 2015, 9:11 p.m. UTC | #4
On Wed, 2015-03-25 at 19:36 +0100, Ingo Molnar wrote:
> * Ingo Molnar <mingo@kernel.org> wrote:
> 
> > > +#define __HAVE_ARCH_REMAP
> > > +static inline void arch_remap(struct mm_struct *mm,
> > > +			      unsigned long old_start, unsigned long old_end,
> > > +			      unsigned long new_start, unsigned long new_end)
> > > +{
> > > +	/*
> > > +	 * mremap() doesn't allow moving multiple vmas so we can limit the
> > > +	 * check to old_start == vdso_base.
> > > +	 */
> > > +	if (old_start == mm->context.vdso_base)
> > > +		mm->context.vdso_base = new_start;
> > > +}
> > 
> > mremap() doesn't allow moving multiple vmas, but it allows the 
> > movement of multi-page vmas and it also allows partial mremap()s, 
> > where it will split up a vma.
> 
> I.e. mremap() supports the shrinking (and growing) of vmas. In that 
> case mremap() will unmap the end of the vma and will shrink the 
> remaining vDSO vma.
> 
> Doesn't that result in a non-working vDSO that should zero out 
> vdso_base?

Right. Now we can't completely prevent the user from shooting themselves
in the foot, I suppose, though there is a legitimate usage scenario,
namely moving the vDSO around, which it would be nice to support. I think
it's reasonable to put the onus on the user here to do the right thing.

Cheers,
Ben.

> Thanks,
> 
> 	Ingo
Ingo Molnar March 26, 2015, 9:43 a.m. UTC | #5
* Benjamin Herrenschmidt <benh@kernel.crashing.org> wrote:

> On Wed, 2015-03-25 at 19:36 +0100, Ingo Molnar wrote:
> > * Ingo Molnar <mingo@kernel.org> wrote:
> > 
> > > > +#define __HAVE_ARCH_REMAP
> > > > +static inline void arch_remap(struct mm_struct *mm,
> > > > +			      unsigned long old_start, unsigned long old_end,
> > > > +			      unsigned long new_start, unsigned long new_end)
> > > > +{
> > > > +	/*
> > > > +	 * mremap() doesn't allow moving multiple vmas so we can limit the
> > > > +	 * check to old_start == vdso_base.
> > > > +	 */
> > > > +	if (old_start == mm->context.vdso_base)
> > > > +		mm->context.vdso_base = new_start;
> > > > +}
> > > 
> > > mremap() doesn't allow moving multiple vmas, but it allows the 
> > > movement of multi-page vmas and it also allows partial mremap()s, 
> > > where it will split up a vma.
> > 
> > I.e. mremap() supports the shrinking (and growing) of vmas. In that 
> > case mremap() will unmap the end of the vma and will shrink the 
> > remaining vDSO vma.
> > 
> > Doesn't that result in a non-working vDSO that should zero out 
> > vdso_base?
> 
> Right. Now we can't completely prevent the user from shooting themselves 
> in the foot, I suppose, though there is a legitimate usage scenario, 
> namely moving the vDSO around, which it would be nice to support. I 
> think it's reasonable to put the onus on the user here to do the right 
> thing.

I argue we should use the right condition to clear vdso_base: if the 
vDSO gets at least partially unmapped. Otherwise there's little point 
in the whole patch: either correctly track whether the vDSO is OK, or 
don't ...
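
A sketch of what that could look like (hedged: vdso_len() is a
hypothetical helper, since the vDSO size currently lives in vdso.c and is
not visible from mmu_context.h):

static inline void arch_unmap(struct mm_struct *mm,
			struct vm_area_struct *vma,
			unsigned long start, unsigned long end)
{
	unsigned long vdso_start = mm->context.vdso_base;
	unsigned long vdso_end = vdso_start + vdso_len(mm);

	/* Clear vdso_base if the unmapped range overlaps any vDSO page. */
	if (vdso_start && start < vdso_end && vdso_start < end)
		mm->context.vdso_base = 0;
}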

There's also the question of mprotect(): can users mprotect() the vDSO 
on PowerPC?

Thanks,

	Ingo
Ingo Molnar March 26, 2015, 9:48 a.m. UTC | #6
* Benjamin Herrenschmidt <benh@kernel.crashing.org> wrote:

> > > +#define __HAVE_ARCH_REMAP
> > > +static inline void arch_remap(struct mm_struct *mm,
> > > +			      unsigned long old_start, unsigned long old_end,
> > > +			      unsigned long new_start, unsigned long new_end)
> > > +{
> > > +	/*
> > > +	 * mremap() doesn't allow moving multiple vmas so we can limit the
> > > +	 * check to old_start == vdso_base.
> > > +	 */
> > > +	if (old_start == mm->context.vdso_base)
> > > +		mm->context.vdso_base = new_start;
> > > +}
> > 
> > mremap() doesn't allow moving multiple vmas, but it allows the 
> > movement of multi-page vmas and it also allows partial mremap()s, 
> > where it will split up a vma.
> > 
> > In particular, what happens if an mremap() is done with 
> > old_start == vdso_base, but a shorter end than the end of the vDSO? 
> > (i.e. a partial mremap() with fewer pages than the vDSO size)
> 
> Is there a way to forbid splitting? Does x86 deal with that case at 
> all, or does it not have to for some other reason?

So we use _install_special_mapping() - maybe PowerPC does that too? 
That adds VM_DONTEXPAND which ought to prevent some - but not all - of 
the VM API weirdnesses.
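
For reference, the special-mapping setup looks roughly like this (a
sketch against the ~3.19 vintage API; the two-page array, the flags, and
the map_vdso() wrapper are illustrative, not the actual powerpc code):

static struct page *vdso_pagelist[2];	/* assumed two-page vDSO */

static const struct vm_special_mapping vdso_mapping = {
	.name  = "[vdso]",
	.pages = vdso_pagelist,
};

/* Called with mmap_sem held for writing.  _install_special_mapping()
 * adds VM_DONTEXPAND itself, which is what defeats growing mremap()s. */
static int map_vdso(struct mm_struct *mm, unsigned long addr,
		    unsigned long len)
{
	struct vm_area_struct *vma;

	vma = _install_special_mapping(mm, addr, len,
				       VM_READ | VM_EXEC |
				       VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC,
				       &vdso_mapping);
	return IS_ERR(vma) ? PTR_ERR(vma) : 0;
}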

On x86 we'll just dump core if someone unmaps the vdso.

Thanks,

	Ingo
Laurent Dufour March 26, 2015, 10:13 a.m. UTC | #7
On 26/03/2015 10:48, Ingo Molnar wrote:
> 
> * Benjamin Herrenschmidt <benh@kernel.crashing.org> wrote:
> 
>>>> +#define __HAVE_ARCH_REMAP
>>>> +static inline void arch_remap(struct mm_struct *mm,
>>>> +			      unsigned long old_start, unsigned long old_end,
>>>> +			      unsigned long new_start, unsigned long new_end)
>>>> +{
>>>> +	/*
>>>> +	 * mremap() doesn't allow moving multiple vmas so we can limit the
>>>> +	 * check to old_start == vdso_base.
>>>> +	 */
>>>> +	if (old_start == mm->context.vdso_base)
>>>> +		mm->context.vdso_base = new_start;
>>>> +}
>>>
>>> mremap() doesn't allow moving multiple vmas, but it allows the 
>>> movement of multi-page vmas and it also allows partial mremap()s, 
>>> where it will split up a vma.
>>>
>>> In particular, what happens if an mremap() is done with 
>>> old_start == vdso_base, but a shorter end than the end of the vDSO? 
>>> (i.e. a partial mremap() with fewer pages than the vDSO size)
>>
>> Is there a way to forbid splitting? Does x86 deal with that case at 
>> all, or does it not have to for some other reason?
> 
> So we use _install_special_mapping() - maybe PowerPC does that too? 
> That adds VM_DONTEXPAND which ought to prevent some - but not all - of 
> the VM API weirdnesses.

The same is done on PowerPC. So calling mremap() to extend the vDSO
fails, but splitting it or unmapping part of it is allowed and leads to
an unusable vDSO.

> On x86 we'll just dump core if someone unmaps the vdso.

On PowerPC, you'll get the same result.

Should we prevent the user from breaking their vDSO?

Thanks,
Laurent.
Laurent Dufour March 26, 2015, 10:37 a.m. UTC | #8
On 26/03/2015 10:43, Ingo Molnar wrote:
> 
> * Benjamin Herrenschmidt <benh@kernel.crashing.org> wrote:
> 
>> On Wed, 2015-03-25 at 19:36 +0100, Ingo Molnar wrote:
>>> * Ingo Molnar <mingo@kernel.org> wrote:
>>>
>>>>> +#define __HAVE_ARCH_REMAP
>>>>> +static inline void arch_remap(struct mm_struct *mm,
>>>>> +			      unsigned long old_start, unsigned long old_end,
>>>>> +			      unsigned long new_start, unsigned long new_end)
>>>>> +{
>>>>> +	/*
>>>>> +	 * mremap() doesn't allow moving multiple vmas so we can limit the
>>>>> +	 * check to old_start == vdso_base.
>>>>> +	 */
>>>>> +	if (old_start == mm->context.vdso_base)
>>>>> +		mm->context.vdso_base = new_start;
>>>>> +}
>>>>
>>>> mremap() doesn't allow moving multiple vmas, but it allows the 
>>>> movement of multi-page vmas and it also allows partial mremap()s, 
>>>> where it will split up a vma.
>>>
>>> I.e. mremap() supports the shrinking (and growing) of vmas. In that 
>>> case mremap() will unmap the end of the vma and will shrink the 
>>> remaining vDSO vma.
>>>
>>> Doesn't that result in a non-working vDSO that should zero out 
>>> vdso_base?
>>
>> Right. Now we can't completely prevent the user from shooting themselves 
>> in the foot, I suppose, though there is a legitimate usage scenario, 
>> namely moving the vDSO around, which it would be nice to support. I 
>> think it's reasonable to put the onus on the user here to do the right 
>> thing.
> 
> I argue we should use the right condition to clear vdso_base: if the 
> vDSO gets at least partially unmapped. Otherwise there's little point 
> in the whole patch: either correctly track whether the vDSO is OK, or 
> don't ...

That's a good option, but it may be hard to achieve in the case where the
vDSO area has been split into multiple pieces.

I'm not sure there is a right way to handle that; this is a best effort
here, allowing a process to unmap its vDSO and have the sigreturn call
done through the stack area (which it has to make executable).

Anyway, I'll dig into that, assuming that the vdso_base pointer should be
cleared if a part of the vDSO is moved or unmapped. The patch will be
larger since I'll have to get the vDSO size, which is private to the
vdso.c file.

> There's also the question of mprotect(): can users mprotect() the vDSO 
> on PowerPC?

Yes, mprotect() on the vDSO is allowed on PowerPC, as it is on x86 and
certainly all the other architectures.
Furthermore, if it is done on part of the vDSO, it splits the vma...
Ingo Molnar March 26, 2015, 2:17 p.m. UTC | #9
* Laurent Dufour <ldufour@linux.vnet.ibm.com> wrote:

> > I argue we should use the right condition to clear vdso_base: if 
> > the vDSO gets at least partially unmapped. Otherwise there's 
> > little point in the whole patch: either correctly track whether 
> > the vDSO is OK, or don't ...
> 
> That's a good option, but it may be hard to achieve in the case where 
> the vDSO area has been split into multiple pieces.
>
> I'm not sure there is a right way to handle that; this is a best effort 
> here, allowing a process to unmap its vDSO and have the sigreturn call 
> done through the stack area (which it has to make executable).
> 
> Anyway, I'll dig into that, assuming that the vdso_base pointer 
> should be cleared if a part of the vDSO is moved or unmapped. The 
> patch will be larger since I'll have to get the vDSO size, which is 
> private to the vdso.c file.

At least for munmap() I don't think that's a worry: once unmapped 
(even if just partially), vdso_base becomes zero and won't ever be set 
again.

So no need to track the zillion pieces, should there be any: Humpty 
Dumpty won't be whole again, right?

> > There's also the question of mprotect(): can users mprotect() the 
> > vDSO on PowerPC?
> 
> Yes, mprotect() on the vDSO is allowed on PowerPC, as it is on x86 and 
> certainly all the other architectures. Furthermore, if it is done on 
> part of the vDSO, it splits the vma...

btw., CRIU's main purpose here is to reconstruct a vDSO that was 
originally randomized, but whose address must now be reproduced as-is, 
right?

In that sense detecting the 'good' mremap() as your patch does should 
do the trick and is certainly not objectionable IMHO - I was just 
wondering whether we could do a perfect job very simply.

Thanks,

	Ingo
Laurent Dufour March 26, 2015, 2:32 p.m. UTC | #10
On 26/03/2015 15:17, Ingo Molnar wrote:
> 
> * Laurent Dufour <ldufour@linux.vnet.ibm.com> wrote:
> 
>>> I argue we should use the right condition to clear vdso_base: if 
>>> the vDSO gets at least partially unmapped. Otherwise there's 
>>> little point in the whole patch: either correctly track whether 
>>> the vDSO is OK, or don't ...
>>
>> That's a good option, but it may be hard to achieve in the case where 
>> the vDSO area has been split into multiple pieces.
>>
>> I'm not sure there is a right way to handle that; this is a best effort 
>> here, allowing a process to unmap its vDSO and have the sigreturn call 
>> done through the stack area (which it has to make executable).
>>
>> Anyway, I'll dig into that, assuming that the vdso_base pointer 
>> should be cleared if a part of the vDSO is moved or unmapped. The 
>> patch will be larger since I'll have to get the vDSO size, which is 
>> private to the vdso.c file.
> 
> At least for munmap() I don't think that's a worry: once unmapped 
> (even if just partially), vdso_base becomes zero and won't ever be set 
> again.
> 
> So no need to track the zillion pieces, should there be any: Humpty 
> Dumpty won't be whole again, right?

My idea is to clear vdso_base if at least part of the vDSO is unmapped.
But since some part of the vDSO may have been moved and then unmapped
later, to be complete, the patch has to handle partial mremap() of the
vDSO too. Otherwise a scenario like the following will not be detected:

	new_area = mremap(vdso_base + page_size, ....);
	munmap(new_area,...);
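
A sketch of an arch_remap() that would catch it (again assuming a
hypothetical vdso_len() helper for the size that is private to vdso.c):

static inline void arch_remap(struct mm_struct *mm,
			      unsigned long old_start, unsigned long old_end,
			      unsigned long new_start, unsigned long new_end)
{
	unsigned long vdso_start = mm->context.vdso_base;
	unsigned long vdso_end = vdso_start + vdso_len(mm);

	if (!vdso_start || old_end <= vdso_start || vdso_end <= old_start)
		return;				/* no overlap with the vDSO */

	if (old_start == vdso_start && old_end == vdso_end)
		mm->context.vdso_base = new_start;	/* whole-vDSO move */
	else
		mm->context.vdso_base = 0;		/* partial: give up */
}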

>>> There's also the question of mprotect(): can users mprotect() the 
>>> vDSO on PowerPC?
>>
>> Yes, mprotect() on the vDSO is allowed on PowerPC, as it is on x86 and 
>> certainly all the other architectures. Furthermore, if it is done on 
>> part of the vDSO, it splits the vma...
> 
> btw., CRIU's main purpose here is to reconstruct a vDSO that was 
> originally randomized, but whose address must now be reproduced as-is, 
> right?

You're right, CRIU has to move the vDSO to the same address it had at
checkpoint time.

> In that sense detecting the 'good' mremap() as your patch does should 
> do the trick and is certainly not objectionable IMHO - I was just 
> wondering whether we could do a perfect job very simply.

I'll try to address the perfect job; this may complicate the patch,
especially because the vDSO's size is not recorded in the PowerPC
mm_context structure. I'm not sure it's a good idea to extend that
structure...

Thanks,
Laurent.
Laurent Dufour March 26, 2015, 5:37 p.m. UTC | #11
CRIU recreates the process memory layout by remapping the checkpointee's
memory areas on top of the current process (criu). This includes remapping
the vDSO to the place it occupied at checkpoint time.

However, some architectures like powerpc keep a reference to the vDSO base
address in order to build the signal return stack frame by calling the
vDSO sigreturn service. So once the vDSO has been moved, this reference is
no longer valid and the signal frames built later are not usable.

This patch series introduces a new mm hook, 'arch_remap', which is called
when mremap is done and the mm lock is still held. The next patch adds the
vDSO remap and unmap tracking to the powerpc architecture.
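
The generic side is small; roughly (simplified from patch 1/2, which also
keeps a no-op fallback for architectures that do not define the hook):

#ifndef __HAVE_ARCH_REMAP
static inline void arch_remap(struct mm_struct *mm,
			      unsigned long old_start, unsigned long old_end,
			      unsigned long new_start, unsigned long new_end)
{
}
#endif

/* In mm/mremap.c, once the pages and vma have been moved and while
 * mmap_sem is still held: */
	arch_remap(mm, old_addr, old_addr + old_len,
		   new_addr, new_addr + new_len);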

Changes in v4:
--------------
- Reworked the PowerPC part of the patch to handle partial unmap and remap
  of the vDSO.

Changes in v3:
--------------
- Fixed a grammatical error in a comment of the second patch.
  Thanks again, Ingo.

Changes in v2:
--------------
- Following Ingo Molnar's advice, the call to arch_remap is now enabled
  through the __HAVE_ARCH_REMAP macro. This considerably reduces the
  first patch.

Laurent Dufour (2):
  mm: Introducing arch_remap hook
  powerpc/mm: Tracking vDSO remap

 arch/powerpc/include/asm/mmu_context.h | 32 +++++++++++++++++++++++++++-
 arch/powerpc/kernel/vdso.c             | 39 ++++++++++++++++++++++++++++++++++
 mm/mremap.c                            | 11 ++++++++--
 3 files changed, 79 insertions(+), 3 deletions(-)
Benjamin Herrenschmidt March 26, 2015, 11:23 p.m. UTC | #12
On Thu, 2015-03-26 at 10:43 +0100, Ingo Molnar wrote:
> * Benjamin Herrenschmidt <benh@kernel.crashing.org> wrote:
> 
> > On Wed, 2015-03-25 at 19:36 +0100, Ingo Molnar wrote:
> > > * Ingo Molnar <mingo@kernel.org> wrote:
> > > 
> > > > > +#define __HAVE_ARCH_REMAP
> > > > > +static inline void arch_remap(struct mm_struct *mm,
> > > > > +			      unsigned long old_start, unsigned long old_end,
> > > > > +			      unsigned long new_start, unsigned long new_end)
> > > > > +{
> > > > > +	/*
> > > > > +	 * mremap() doesn't allow moving multiple vmas so we can limit the
> > > > > +	 * check to old_start == vdso_base.
> > > > > +	 */
> > > > > +	if (old_start == mm->context.vdso_base)
> > > > > +		mm->context.vdso_base = new_start;
> > > > > +}
> > > > 
> > > > mremap() doesn't allow moving multiple vmas, but it allows the 
> > > > movement of multi-page vmas and it also allows partial mremap()s, 
> > > > where it will split up a vma.
> > > 
> > > I.e. mremap() supports the shrinking (and growing) of vmas. In that 
> > > case mremap() will unmap the end of the vma and will shrink the 
> > > remaining vDSO vma.
> > > 
> > > Doesn't that result in a non-working vDSO that should zero out 
> > > vdso_base?
> > 
> > Right. Now we can't completely prevent the user from shooting themselves 
> > in the foot, I suppose, though there is a legitimate usage scenario, 
> > namely moving the vDSO around, which it would be nice to support. I 
> > think it's reasonable to put the onus on the user here to do the right 
> > thing.
> 
> I argue we should use the right condition to clear vdso_base: if the 
> vDSO gets at least partially unmapped. Otherwise there's little point 
> in the whole patch: either correctly track whether the vDSO is OK, or 
> don't ...

Well, if we are going to clear it at all, then yes, we should probably be
a bit smarter about it. My point, however, was that we probably don't need
to be super robust about dealing with any crazy scenario userspace might
conceive.

> There's also the question of mprotect(): can users mprotect() the vDSO 
> on PowerPC?

Nothing prevents it. But here too, I wouldn't bother. The user might be
doing it on purpose, expecting to catch the resulting signal, for example
(though arguably a signal from a sigreturn frame is ... odd).

Cheers,
Ben.

Patch

diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 73382eba02dc..7d315c1898d4 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -8,7 +8,6 @@ 
 #include <linux/spinlock.h>
 #include <asm/mmu.h>	
 #include <asm/cputable.h>
-#include <asm-generic/mm_hooks.h>
 #include <asm/cputhreads.h>
 
 /*
@@ -109,5 +108,40 @@  static inline void enter_lazy_tlb(struct mm_struct *mm,
 #endif
 }
 
+static inline void arch_dup_mmap(struct mm_struct *oldmm,
+				 struct mm_struct *mm)
+{
+}
+
+static inline void arch_exit_mmap(struct mm_struct *mm)
+{
+}
+
+static inline void arch_unmap(struct mm_struct *mm,
+			struct vm_area_struct *vma,
+			unsigned long start, unsigned long end)
+{
+	if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
+		mm->context.vdso_base = 0;
+}
+
+static inline void arch_bprm_mm_init(struct mm_struct *mm,
+				     struct vm_area_struct *vma)
+{
+}
+
+#define __HAVE_ARCH_REMAP
+static inline void arch_remap(struct mm_struct *mm,
+			      unsigned long old_start, unsigned long old_end,
+			      unsigned long new_start, unsigned long new_end)
+{
+	/*
+	 * mremap() doesn't allow moving multiple vmas so we can limit the
+	 * check to old_start == vdso_base.
+	 */
+	if (old_start == mm->context.vdso_base)
+		mm->context.vdso_base = new_start;
+}
+
 #endif /* __KERNEL__ */
 #endif /* __ASM_POWERPC_MMU_CONTEXT_H */