
[v2,1/2] powerpc: Fix virt_addr_valid() check

Message ID 20211225120621.13908-2-wangkefeng.wang@huawei.com (mailing list archive)
State Superseded
Series mm: Fix kernel BUG in __check_heap_object() on PowerPC64

Commit Message

Kefeng Wang Dec. 25, 2021, 12:06 p.m. UTC
When running ethtool eth0, the following BUG occurred:

  usercopy: Kernel memory exposure attempt detected from SLUB object not in SLUB page?! (offset 0, size 1048)!
  kernel BUG at mm/usercopy.c:99
  ...
  usercopy_abort+0x64/0xa0 (unreliable)
  __check_heap_object+0x168/0x190
  __check_object_size+0x1a0/0x200
  dev_ethtool+0x2494/0x2b20
  dev_ioctl+0x5d0/0x770
  sock_do_ioctl+0xf0/0x1d0
  sock_ioctl+0x3ec/0x5a0
  __se_sys_ioctl+0xf0/0x160
  system_call_exception+0xfc/0x1f0
  system_call_common+0xf8/0x200

The offending code is shown below:

  data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
  copy_to_user(useraddr, data, gstrings.len * ETH_GSTRING_LEN))

The data is allocated by vmalloc(), but virt_addr_valid(ptr) returns
true for it on PowerPC64, which leads to the panic.
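
For reference, the failure path is roughly as follows (a simplified
sketch, not the literal mm/usercopy.c code):

  static void check_heap_object_sketch(const void *ptr, unsigned long n,
                                       bool to_user)
  {
          struct page *page;

          /* vmalloc pointers are expected to be filtered out here */
          if (!virt_addr_valid(ptr))
                  return;

          /*
           * With the broken check a vmalloc pointer slips through,
           * virt_to_head_page() resolves it to an unrelated page, and
           * the SLUB consistency check then aborts with the
           * "not in SLUB page?!" message seen in the trace above.
           */
          page = virt_to_head_page(ptr);
          if (PageSlab(page))
                  __check_heap_object(ptr, n, page, to_user);
  }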

As commit 4dd7554a6456 ("powerpc/64: Add VIRTUAL_BUG_ON checks for __va
and __pa addresses") does, make sure the virtual address is above
PAGE_OFFSET in virt_addr_valid().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/powerpc/include/asm/page.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

Comments

Kefeng Wang Jan. 8, 2022, 11:58 a.m. UTC | #1
Hi PPC maintainers, ping..

On 2021/12/25 20:06, Kefeng Wang wrote:
> When run ethtool eth0, the BUG occurred,
>
>    usercopy: Kernel memory exposure attempt detected from SLUB object not in SLUB page?! (offset 0, size 1048)!
>    kernel BUG at mm/usercopy.c:99
>    ...
>    usercopy_abort+0x64/0xa0 (unreliable)
>    __check_heap_object+0x168/0x190
>    __check_object_size+0x1a0/0x200
>    dev_ethtool+0x2494/0x2b20
>    dev_ioctl+0x5d0/0x770
>    sock_do_ioctl+0xf0/0x1d0
>    sock_ioctl+0x3ec/0x5a0
>    __se_sys_ioctl+0xf0/0x160
>    system_call_exception+0xfc/0x1f0
>    system_call_common+0xf8/0x200
>
> The code shows below,
>
>    data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
>    copy_to_user(useraddr, data, gstrings.len * ETH_GSTRING_LEN))
>
> The data is alloced by vmalloc(), virt_addr_valid(ptr) will return true
> on PowerPC64, which leads to the panic.
>
> As commit 4dd7554a6456 ("powerpc/64: Add VIRTUAL_BUG_ON checks for __va
> and __pa addresses") does, make sure the virt addr above PAGE_OFFSET in
> the virt_addr_valid().
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>   arch/powerpc/include/asm/page.h | 5 ++++-
>   1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
> index 254687258f42..300d4c105a3a 100644
> --- a/arch/powerpc/include/asm/page.h
> +++ b/arch/powerpc/include/asm/page.h
> @@ -132,7 +132,10 @@ static inline bool pfn_valid(unsigned long pfn)
>   #define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
>   #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
>   
> -#define virt_addr_valid(kaddr)	pfn_valid(virt_to_pfn(kaddr))
> +#define virt_addr_valid(vaddr)	({						\
> +	unsigned long _addr = (unsigned long)vaddr;				\
> +	(unsigned long)(_addr) >= PAGE_OFFSET && pfn_valid(virt_to_pfn(_addr));	\
> +})
>   
>   /*
>    * On Book-E parts we need __va to parse the device tree and we can't
Christophe Leroy Jan. 10, 2022, 8:01 a.m. UTC | #2
Le 25/12/2021 à 13:06, Kefeng Wang a écrit :
> When run ethtool eth0, the BUG occurred,
> 
>    usercopy: Kernel memory exposure attempt detected from SLUB object not in SLUB page?! (offset 0, size 1048)!
>    kernel BUG at mm/usercopy.c:99
>    ...
>    usercopy_abort+0x64/0xa0 (unreliable)
>    __check_heap_object+0x168/0x190
>    __check_object_size+0x1a0/0x200
>    dev_ethtool+0x2494/0x2b20
>    dev_ioctl+0x5d0/0x770
>    sock_do_ioctl+0xf0/0x1d0
>    sock_ioctl+0x3ec/0x5a0
>    __se_sys_ioctl+0xf0/0x160
>    system_call_exception+0xfc/0x1f0
>    system_call_common+0xf8/0x200
> 
> The code shows below,
> 
>    data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
>    copy_to_user(useraddr, data, gstrings.len * ETH_GSTRING_LEN))
> 
> The data is alloced by vmalloc(), virt_addr_valid(ptr) will return true
> on PowerPC64, which leads to the panic.
> 
> As commit 4dd7554a6456 ("powerpc/64: Add VIRTUAL_BUG_ON checks for __va
> and __pa addresses") does, make sure the virt addr above PAGE_OFFSET in
> the virt_addr_valid().

The change done by that commit only applies to PPC64.

The change you are doing applies to both PPC64 and PPC32. Did you verify 
the impact (or should I say the absence of impact) on PPC32?

> 
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>   arch/powerpc/include/asm/page.h | 5 ++++-
>   1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
> index 254687258f42..300d4c105a3a 100644
> --- a/arch/powerpc/include/asm/page.h
> +++ b/arch/powerpc/include/asm/page.h
> @@ -132,7 +132,10 @@ static inline bool pfn_valid(unsigned long pfn)
>   #define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
>   #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
>   
> -#define virt_addr_valid(kaddr)	pfn_valid(virt_to_pfn(kaddr))
> +#define virt_addr_valid(vaddr)	({						\
> +	unsigned long _addr = (unsigned long)vaddr;				\
> +	(unsigned long)(_addr) >= PAGE_OFFSET && pfn_valid(virt_to_pfn(_addr));	\

_addr is already an 'unsigned long' so you shouldn't need to cast it.

> +})
>   
>   /*
>    * On Book-E parts we need __va to parse the device tree and we can't
Nicholas Piggin Jan. 11, 2022, 4:37 a.m. UTC | #3
Excerpts from Kefeng Wang's message of January 8, 2022 9:58 pm:
> Hi PPC maintainers, ping..

Hmm. I might have confused myself about this. I'm going back and
trying to work out what I was thinking when I suggested it. This
works on 64e because vmalloc space is below the kernel linear map,
right?

On 64s it is the other way around and it is still possible to enable 
flatmem on 64s. Although we might just not hit the problem there because 
__pa() will not mask away the vmalloc offset for 64s so it will still 
return something that's outside the pfn_valid range for flatmem. That's 
very subtle though.

The checks added to __pa actually don't prevent vmalloc memory from
being passed to it either on 64s, only a more basic test.

I think 64s wants (addr >= PAGE_OFFSET && addr < KERN_VIRT_START) as
the condition.  Could possibly add that check to __pa as well to
catch vmalloc addresses.
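
Spelled out as a macro, that condition would look something like the
following (illustrative only, not a tested patch):

  #define virt_addr_valid(vaddr)	({				\
  	unsigned long _addr = (unsigned long)(vaddr);		\
  	_addr >= PAGE_OFFSET && _addr < KERN_VIRT_START &&	\
  		pfn_valid(virt_to_pfn(_addr));			\
  })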

Thanks,
Nick

> 
> On 2021/12/25 20:06, Kefeng Wang wrote:
>> When run ethtool eth0, the BUG occurred,
>>
>>    usercopy: Kernel memory exposure attempt detected from SLUB object not in SLUB page?! (offset 0, size 1048)!
>>    kernel BUG at mm/usercopy.c:99
>>    ...
>>    usercopy_abort+0x64/0xa0 (unreliable)
>>    __check_heap_object+0x168/0x190
>>    __check_object_size+0x1a0/0x200
>>    dev_ethtool+0x2494/0x2b20
>>    dev_ioctl+0x5d0/0x770
>>    sock_do_ioctl+0xf0/0x1d0
>>    sock_ioctl+0x3ec/0x5a0
>>    __se_sys_ioctl+0xf0/0x160
>>    system_call_exception+0xfc/0x1f0
>>    system_call_common+0xf8/0x200
>>
>> The code shows below,
>>
>>    data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
>>    copy_to_user(useraddr, data, gstrings.len * ETH_GSTRING_LEN))
>>
>> The data is alloced by vmalloc(), virt_addr_valid(ptr) will return true
>> on PowerPC64, which leads to the panic.
>>
>> As commit 4dd7554a6456 ("powerpc/64: Add VIRTUAL_BUG_ON checks for __va
>> and __pa addresses") does, make sure the virt addr above PAGE_OFFSET in
>> the virt_addr_valid().
>>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> ---
>>   arch/powerpc/include/asm/page.h | 5 ++++-
>>   1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
>> index 254687258f42..300d4c105a3a 100644
>> --- a/arch/powerpc/include/asm/page.h
>> +++ b/arch/powerpc/include/asm/page.h
>> @@ -132,7 +132,10 @@ static inline bool pfn_valid(unsigned long pfn)
>>   #define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
>>   #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
>>   
>> -#define virt_addr_valid(kaddr)	pfn_valid(virt_to_pfn(kaddr))
>> +#define virt_addr_valid(vaddr)	({						\
>> +	unsigned long _addr = (unsigned long)vaddr;				\
>> +	(unsigned long)(_addr) >= PAGE_OFFSET && pfn_valid(virt_to_pfn(_addr));	\
>> +})
>>   
>>   /*
>>    * On Book-E parts we need __va to parse the device tree and we can't
>
Christophe Leroy Jan. 11, 2022, 6:04 a.m. UTC | #4
Le 11/01/2022 à 05:37, Nicholas Piggin a écrit :
> Excerpts from Kefeng Wang's message of January 8, 2022 9:58 pm:
>> Hi PPC maintainers, ping..
> 
> Hmm. I might have confused myself about this. I'm going back and
> trying to work out what I was thinking when I suggested it. This
> works on 64e because vmalloc space is below the kernel linear map,
> right?
> 
> On 64s it is the other way around and it is still possible to enable
> flatmem on 64s. Altough we might just not hit the problem there because
> __pa() will not mask away the vmalloc offset for 64s so it will still
> return something that's outside the pfn_valid range for flatmem. That's
> very subtle though.

That's the way it works on PPC32 at least, so for me it's not shocking 
to have it work the same way on PPC64s.

The main issue here is the way __pa() works. On PPC32 __pa = va - 
PAGE_OFFSET, so it works correctly for any address.
On PPC64, __pa() works by masking out the 2 top bits instead of 
subtracting PAGE_OFFSET, so the test must add a verification that we 
really have the 2 top bits set in the first place. This is what (addr >= 
PAGE_OFFSET) does. Once this first test is done, we can perfectly rely 
on pfn_valid() just like on PPC32; I see absolutely no point in an 
additional test checking that the addr is below KERN_VIRT_START.
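
To make the difference concrete, a rough illustration (not the literal
asm/page.h definitions):

  /* PPC32: plain subtraction, well defined for any kernel address */
  #define __pa_ppc32(va)	((unsigned long)(va) - PAGE_OFFSET)

  /* PPC64: the PAGE_OFFSET bits are masked off instead of subtracted */
  #define __pa_ppc64(va)	((unsigned long)(va) & ~PAGE_OFFSET)

An address without the two top bits set (e.g. a vmalloc address on 64e)
therefore still folds onto a plausible physical address on PPC64, which
is why the "addr >= PAGE_OFFSET" test has to come first; after that,
pfn_valid() can be trusted.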


> 
> The checks added to __pa actually don't prevent vmalloc memory from
> being passed to it either on 64s, only a more basic test.

That's correct. It is the role of pfn_valid() to check that.

Christophe

> 
> I think 64s wants (addr >= PAGE_OFFSET && addr < KERN_VIRT_START) as
> the condition.  Could possibly add that check to __pa as well to
> catch vmalloc addresses.
> 
> Thanks,
> Nick
> 
>>
>> On 2021/12/25 20:06, Kefeng Wang wrote:
>>> When run ethtool eth0, the BUG occurred,
>>>
>>>     usercopy: Kernel memory exposure attempt detected from SLUB object not in SLUB page?! (offset 0, size 1048)!
>>>     kernel BUG at mm/usercopy.c:99
>>>     ...
>>>     usercopy_abort+0x64/0xa0 (unreliable)
>>>     __check_heap_object+0x168/0x190
>>>     __check_object_size+0x1a0/0x200
>>>     dev_ethtool+0x2494/0x2b20
>>>     dev_ioctl+0x5d0/0x770
>>>     sock_do_ioctl+0xf0/0x1d0
>>>     sock_ioctl+0x3ec/0x5a0
>>>     __se_sys_ioctl+0xf0/0x160
>>>     system_call_exception+0xfc/0x1f0
>>>     system_call_common+0xf8/0x200
>>>
>>> The code shows below,
>>>
>>>     data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
>>>     copy_to_user(useraddr, data, gstrings.len * ETH_GSTRING_LEN))
>>>
>>> The data is alloced by vmalloc(), virt_addr_valid(ptr) will return true
>>> on PowerPC64, which leads to the panic.
>>>
>>> As commit 4dd7554a6456 ("powerpc/64: Add VIRTUAL_BUG_ON checks for __va
>>> and __pa addresses") does, make sure the virt addr above PAGE_OFFSET in
>>> the virt_addr_valid().
>>>
>>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>>> ---
>>>    arch/powerpc/include/asm/page.h | 5 ++++-
>>>    1 file changed, 4 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
>>> index 254687258f42..300d4c105a3a 100644
>>> --- a/arch/powerpc/include/asm/page.h
>>> +++ b/arch/powerpc/include/asm/page.h
>>> @@ -132,7 +132,10 @@ static inline bool pfn_valid(unsigned long pfn)
>>>    #define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
>>>    #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
>>>    
>>> -#define virt_addr_valid(kaddr)	pfn_valid(virt_to_pfn(kaddr))
>>> +#define virt_addr_valid(vaddr)	({						\
>>> +	unsigned long _addr = (unsigned long)vaddr;				\
>>> +	(unsigned long)(_addr) >= PAGE_OFFSET && pfn_valid(virt_to_pfn(_addr));	\
>>> +})
>>>    
>>>    /*
>>>     * On Book-E parts we need __va to parse the device tree and we can't
>>
Kefeng Wang Jan. 19, 2022, 1:15 a.m. UTC | #5
On 2022/1/11 14:04, Christophe Leroy wrote:
>
> Le 11/01/2022 à 05:37, Nicholas Piggin a écrit :
>> Excerpts from Kefeng Wang's message of January 8, 2022 9:58 pm:
>>> Hi PPC maintainers, ping..
>> Hmm. I might have confused myself about this. I'm going back and
>> trying to work out what I was thinking when I suggested it. This
>> works on 64e because vmalloc space is below the kernel linear map,
>> right?
>>
>> On 64s it is the other way around and it is still possible to enable
>> flatmem on 64s. Altough we might just not hit the problem there because
>> __pa() will not mask away the vmalloc offset for 64s so it will still
>> return something that's outside the pfn_valid range for flatmem. That's
>> very subtle though.
> That's the way it works on PPC32 at least, so for me it's not chocking
> to have it work the same way on PPC64s.
>
> The main issue here is the way __pa() works. On PPC32 __pa = va -
> PAGE_OFFSET, so it works correctly for any address.
> On PPC64, __pa() works by masking out the 2 top bits instead of
> substracting PAGE_OFFSET, so the test must add a verification that we
> really have the 2 top bits set at first. This is what (addr >=
> PAGE_OFFSET) does. Once this first test is done, we can perfectly rely
> on pfn_valid() just like PPC32, I see absolutely no point in an
> additionnal test checking the addr is below KERN_VIRT_START.


Hi Christophe and Nicholas, for ppc32 I think we also need to check the
upper limit, eg,

  addr >= PAGE_OFFSET && addr < high_memory

arch/powerpc/mm/mem.c:  high_memory = (void *) __va(max_low_pfn * PAGE_SIZE);

For ppc32 max_low_pfn is the highest low-memory pfn, and for ppc64
high_memory is the highest memory pfn, so the bound looks good for both.
Correct me if I'm wrong; if the above check (sketched below) is ok, I
will send a new v3, thanks.
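
Something along these lines (a rough sketch of the intended check, not
the v3 patch itself):

  #define virt_addr_valid(vaddr)	({				\
  	unsigned long _addr = (unsigned long)(vaddr);		\
  	_addr >= PAGE_OFFSET &&					\
  	_addr < (unsigned long)high_memory &&			\
  	pfn_valid(virt_to_pfn(_addr));				\
  })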




>
>
>> The checks added to __pa actually don't prevent vmalloc memory from
>> being passed to it either on 64s, only a more basic test.
> That's correct. It is the role of pfn_valid() to check that.
>
> Christophe
>
>> I think 64s wants (addr >= PAGE_OFFSET && addr < KERN_VIRT_START) as
>> the condition.  Could possibly add that check to __pa as well to
>> catch vmalloc addresses.
>>
>> Thanks,
>> Nick
>>
>>
Christophe Leroy Jan. 20, 2022, 7:31 a.m. UTC | #6
Le 19/01/2022 à 02:15, Kefeng Wang a écrit :
> 
> On 2022/1/11 14:04, Christophe Leroy wrote:
>>
>> Le 11/01/2022 à 05:37, Nicholas Piggin a écrit :
>>> Excerpts from Kefeng Wang's message of January 8, 2022 9:58 pm:
>>>> Hi PPC maintainers, ping..
>>> Hmm. I might have confused myself about this. I'm going back and
>>> trying to work out what I was thinking when I suggested it. This
>>> works on 64e because vmalloc space is below the kernel linear map,
>>> right?
>>>
>>> On 64s it is the other way around and it is still possible to enable
>>> flatmem on 64s. Altough we might just not hit the problem there because
>>> __pa() will not mask away the vmalloc offset for 64s so it will still
>>> return something that's outside the pfn_valid range for flatmem. That's
>>> very subtle though.
>> That's the way it works on PPC32 at least, so for me it's not chocking
>> to have it work the same way on PPC64s.
>>
>> The main issue here is the way __pa() works. On PPC32 __pa = va -
>> PAGE_OFFSET, so it works correctly for any address.
>> On PPC64, __pa() works by masking out the 2 top bits instead of
>> substracting PAGE_OFFSET, so the test must add a verification that we
>> really have the 2 top bits set at first. This is what (addr >=
>> PAGE_OFFSET) does. Once this first test is done, we can perfectly rely
>> on pfn_valid() just like PPC32, I see absolutely no point in an
>> additionnal test checking the addr is below KERN_VIRT_START.
> 
> 
> Hi Christophe and Nicholas, for ppc32, I think we need check the upper 
> limit,

Why? Have you experienced any problem at all on PPC32 with the way it 
is done at the moment?

I don't think we have to change PPC32 at all unless we have a real 
reason to do it.

> 
> eg,  addr >= PAGE_OFFSET && addr < high_memory

Isn't that exactly what pfn_valid() already does today?
Why change that at all?

Christophe

> 
> arch/powerpc/mm/mem.c:  high_memory = (void *) __va(max_low_pfn * 
> PAGE_SIZE);
> 
> for ppc32 max_low_pfn is the upper low memory pfn,  and For ppc64, 
> high_memory is
> 
> the max memory pfn, it looks good too, correct me if I'm wrong, if the 
> above check
> 
> is ok, I will send a new v3,  thanks.
> 
> 
> 
> 
>>
>>
>>> The checks added to __pa actually don't prevent vmalloc memory from
>>> being passed to it either on 64s, only a more basic test.
>> That's correct. It is the role of pfn_valid() to check that.
>>
>> Christophe
>>
>>> I think 64s wants (addr >= PAGE_OFFSET && addr < KERN_VIRT_START) as
>>> the condition.  Could possibly add that check to __pa as well to
>>> catch vmalloc addresses.
>>>
>>> Thanks,
>>> Nick
>>>
>>>
Kefeng Wang Jan. 20, 2022, 11:09 a.m. UTC | #7
On 2022/1/20 15:31, Christophe Leroy wrote:
>
> Le 19/01/2022 à 02:15, Kefeng Wang a écrit :
>> On 2022/1/11 14:04, Christophe Leroy wrote:
>>> Le 11/01/2022 à 05:37, Nicholas Piggin a écrit :
>>>> Excerpts from Kefeng Wang's message of January 8, 2022 9:58 pm:
>>>>> Hi PPC maintainers, ping..
>>>> Hmm. I might have confused myself about this. I'm going back and
>>>> trying to work out what I was thinking when I suggested it. This
>>>> works on 64e because vmalloc space is below the kernel linear map,
>>>> right?
>>>>
>>>> On 64s it is the other way around and it is still possible to enable
>>>> flatmem on 64s. Altough we might just not hit the problem there because
>>>> __pa() will not mask away the vmalloc offset for 64s so it will still
>>>> return something that's outside the pfn_valid range for flatmem. That's
>>>> very subtle though.
>>> That's the way it works on PPC32 at least, so for me it's not chocking
>>> to have it work the same way on PPC64s.
>>>
>>> The main issue here is the way __pa() works. On PPC32 __pa = va -
>>> PAGE_OFFSET, so it works correctly for any address.
>>> On PPC64, __pa() works by masking out the 2 top bits instead of
>>> substracting PAGE_OFFSET, so the test must add a verification that we
>>> really have the 2 top bits set at first. This is what (addr >=
>>> PAGE_OFFSET) does. Once this first test is done, we can perfectly rely
>>> on pfn_valid() just like PPC32, I see absolutely no point in an
>>> additionnal test checking the addr is below KERN_VIRT_START.
>>
>> Hi Christophe and Nicholas, for ppc32, I think we need check the upper
>> limit,
> Why ? Have you experimented any problem at all on PPC32 with the way it
> is done at the moment ?
>
> I don't think we have to change PPC32 at all unless we have a real
> reason to do it.

Yes, I missed this commit in the old kernel (LTS 5.10); you have
already fixed the upper limit:

commit 602946ec2f90d5bd965857753880db29d2d9a1e9
Author: Christophe Leroy <christophe.leroy@csgroup.eu>
Date:   Tue Oct 12 12:40:37 2021 +0200

     powerpc: Set max_mapnr correctly
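
For context, the flatmem pfn_valid() in asm/page.h is roughly the
following (simplified), so with max_mapnr set correctly it already
provides the upper bound discussed above:

  static inline bool pfn_valid(unsigned long pfn)
  {
  	unsigned long min_pfn = ARCH_PFN_OFFSET;

  	return pfn >= min_pfn && pfn < max_mapnr;
  }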


>
>> eg,  addr >= PAGE_OFFSET && addr < high_memory
> Isn't it exactly what pfn_valid() already do today ?
> Why change that at all ?
>
> Christophe
>
>> arch/powerpc/mm/mem.c:  high_memory = (void *) __va(max_low_pfn *
>> PAGE_SIZE);
>>
>> for ppc32 max_low_pfn is the upper low memory pfn,  and For ppc64,
>> high_memory is
>>
>> the max memory pfn, it looks good too, correct me if I'm wrong, if the
>> above check
>>
>> is ok, I will send a new v3,  thanks.
>>
>>
>>
>>
>>>
>>>> The checks added to __pa actually don't prevent vmalloc memory from
>>>> being passed to it either on 64s, only a more basic test.
>>> That's correct. It is the role of pfn_valid() to check that.
>>>
>>> Christophe
>>>
>>>> I think 64s wants (addr >= PAGE_OFFSET && addr < KERN_VIRT_START) as
>>>> the condition.  Could possibly add that check to __pa as well to
>>>> catch vmalloc addresses.
>>>>
>>>> Thanks,
>>>> Nick
>>>>
>>> >

Patch

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 254687258f42..300d4c105a3a 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -132,7 +132,10 @@  static inline bool pfn_valid(unsigned long pfn)
 #define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
 #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
 
-#define virt_addr_valid(kaddr)	pfn_valid(virt_to_pfn(kaddr))
+#define virt_addr_valid(vaddr)	({						\
+	unsigned long _addr = (unsigned long)vaddr;				\
+	(unsigned long)(_addr) >= PAGE_OFFSET && pfn_valid(virt_to_pfn(_addr));	\
+})
 
 /*
  * On Book-E parts we need __va to parse the device tree and we can't