[GIT,PULL] arm64 fixes for 5.11-rc6

Message ID 20210129190322.GA4590@gaia
State New
Series [GIT,PULL] arm64 fixes for 5.11-rc6

Pull-request

git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux tags/arm64-fixes

Message

Catalin Marinas Jan. 29, 2021, 7:03 p.m. UTC
Hi Linus,

Please pull the arm64 fixes below. Thanks.

The following changes since commit 75bd4bff300b3c5252d4a0e7a959569c62d1dbae:

  arm64: kprobes: Fix Uexpected kernel BRK exception at EL1 (2021-01-22 16:05:29 +0000)

are available in the Git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux tags/arm64-fixes

for you to fetch changes up to a1df829ead5877d4a1061e976a50e2e665a16f24:

  ACPI/IORT: Do not blindly trust DMA masks from firmware (2021-01-27 12:26:24 +0000)

----------------------------------------------------------------
arm64 fixes:

- Fix the virt_addr_valid() returning true for < PAGE_OFFSET addresses.

- Do not blindly trust the DMA masks from ACPI/IORT.

----------------------------------------------------------------
Moritz Fischer (1):
      ACPI/IORT: Do not blindly trust DMA masks from firmware

Vincenzo Frascino (1):
      arm64: Fix kernel address detection of __is_lm_address()

 arch/arm64/include/asm/memory.h |  6 ++++--
 drivers/acpi/arm64/iort.c       | 14 ++++++++++++--
 2 files changed, 16 insertions(+), 4 deletions(-)

Comments

Linus Torvalds Jan. 29, 2021, 10:09 p.m. UTC | #1
On Fri, Jan 29, 2021 at 11:03 AM Catalin Marinas
<catalin.marinas@arm.com> wrote:
>
> arm64 fixes:
>
> - Fix the virt_addr_valid() returning true for < PAGE_OFFSET addresses.

That's a really odd fix.

It went from an incorrect bitwise operation (masking) to an _odd_
bitwise operation (xor).

Yes, PAGE_OFFSET has the bit pattern of all upper bits set, so "(addr
^ PAGE_OFFSET)" by definition reverses the upper bits - and for a
valid case turns them to zero.

But isn't the *logical* thing to do to use a subtract instead? For the
valid cases, the two do the same thing (clear the upper bits), but
just conceptually, isn't the operation that you actually want to do
"(addr - PAGE_OFFSET)"?

IOW, why is it using that odd xor pattern that doesn't make much
sense? I believe it _works_, but it looks very strange to me.

Also, shouldn't _lm_to_phys() do the same? It does that "mask upper
bits" too that was problematic in __is_lm_address(). Again, shouldn't
that logically be a subtract op?

             Linus
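
For illustration, a minimal userspace model of the three forms under
discussion (the old mask, the merged xor fix, and the subtract suggested
here) might look like the sketch below. The PAGE_OFFSET/PAGE_END
constants are made up for the example and are not the real,
configuration-dependent arm64 values:

#include <stdint.h>
#include <stdio.h>

/* Illustrative layout only; the real values depend on VA_BITS etc. */
#define PAGE_OFFSET	0xffff000000000000ULL	/* all upper bits set */
#define PAGE_END	0xffff800000000000ULL

/* Old check: masking folds e.g. addr == 0 into the valid range. */
static int is_lm_mask(uint64_t addr)
{
	return (addr & ~PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET);
}

/* Merged fix: xor clears the upper bits only when they match PAGE_OFFSET. */
static int is_lm_xor(uint64_t addr)
{
	return (addr ^ PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET);
}

/* Subtract form: same result for valid addresses, arguably clearer. */
static int is_lm_sub(uint64_t addr)
{
	return (addr - PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET);
}

int main(void)
{
	const uint64_t samples[] = { 0, PAGE_OFFSET, PAGE_END - 1, PAGE_END };

	for (int i = 0; i < 4; i++)
		printf("0x%016llx: mask=%d xor=%d sub=%d\n",
		       (unsigned long long)samples[i],
		       is_lm_mask(samples[i]), is_lm_xor(samples[i]),
		       is_lm_sub(samples[i]));
	return 0;
}

Running it shows the mask form accepting address 0 while the xor and
subtract forms reject it, which is the virt_addr_valid() problem the fix
addresses; for addresses inside the modelled linear map all three agree.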
pr-tracker-bot@kernel.org Jan. 29, 2021, 10:12 p.m. UTC | #2
The pull request you sent on Fri, 29 Jan 2021 19:03:24 +0000:

> git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux tags/arm64-fixes

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/0e9bcda5d286f4a26a5407bb38f55c55b453ecfb

Thank you!
Catalin Marinas Jan. 31, 2021, 6:54 p.m. UTC | #3
On Fri, Jan 29, 2021 at 02:09:05PM -0800, Linus Torvalds wrote:
> On Fri, Jan 29, 2021 at 11:03 AM Catalin Marinas
> <catalin.marinas@arm.com> wrote:
> >
> > arm64 fixes:
> >
> > - Fix the virt_addr_valid() returning true for < PAGE_OFFSET addresses.
> 
> That's a really odd fix.
> 
> It went from an incorrect bitwise operation (masking) to an _odd_
> bitwise operation (xor).
> 
> Yes, PAGE_OFFSET has the bit pattern of all upper bits set, so "(addr
> ^ PAGE_OFFSET)" by definition reverses the upper bits - and for a
> valid case turns them to zero.
> 
> But isn't the *logical* thing to do to use a subtract instead? For the
> valid cases, the two do the same thing (clear the upper bits), but
> just conceptually, isn't the operation that you actually want to do
> "(addr - PAGE_OFFSET)"?
> 
> IOW, why is it using that odd xor pattern that doesn't make much
> sense? I believe it _works_, but it looks very strange to me.

This macro used to test a single bit and it evolved into a bitmask. So,
yes, basically what we need is:

#define __is_lm_address(addr)	((u64)(addr) >= PAGE_OFFSET && \
				 (u64)(addr) < PAGE_END)

I wasn't sure whether the code generation with two comparisons is
similar to the xor variant but the compiler should probably be smart
enough to use CMP and CCMP. In the grand scheme, it probably doesn't
even matter.

Unless I miss something, I don't see any overflow issues even if we do
(((u64)addr - PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET)).

We can backport the fix that's already upstream and clean up the code in
mainline going forward (after some sanity check on the code generation).
It would be easier to parse in the future.

> Also, shouldn't _lm_to_phys() do the same? It does that "mask upper
> bits" too that was problematic in __is_lm_address(). Again, shouldn't
> that logically be a subtract op?

Yes, that's similar and a subtract should do.
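
As a quick sanity check of the no-overflow observation above, the
subtract form can be compared exhaustively against the explicit
two-comparison range check over a narrower address width; the 16-bit
constants below merely stand in for PAGE_OFFSET and PAGE_END:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Stand-ins for PAGE_OFFSET/PAGE_END in a 16-bit address space. */
	const uint16_t off = 0xe000, end = 0xf000;

	for (uint32_t a = 0; a <= UINT16_MAX; a++) {
		uint16_t addr = (uint16_t)a;
		int range = addr >= off && addr < end;
		int sub = (uint16_t)(addr - off) < (uint16_t)(end - off);

		assert(range == sub);	/* no overflow-induced mismatch */
	}
	printf("subtract form matches the range check for every address\n");
	return 0;
}

Unsigned wraparound sends any address below the base far past the window
size, so the two forms agree everywhere.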
Ard Biesheuvel Jan. 31, 2021, 11:07 p.m. UTC | #4
On Sun, 31 Jan 2021 at 19:55, Catalin Marinas <catalin.marinas@arm.com> wrote:
>
> This macro used to test a single bit and it evolved into a bitmask. So,
> yes, basically what we need is:
>
> #define __is_lm_address(addr)   ((u64)(addr) >= PAGE_OFFSET && \
>                                  (u64)(addr) < PAGE_END)
>
> I wasn't sure whether the code generation with two comparisons is
> similar to the xor variant but the compiler should probably be smart
> enough to use CMP and CCMP. In the grand scheme, it probably doesn't
> even matter.
>
> Unless I miss something, I don't see any overflow issues even if we do
> (((u64)addr - PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET)).
>
> We can backport the fix that's already upstream and clean up the code in
> mainline going forward (after some sanity check on the code generation).
> It would be easier to parse in the future.
>
> > Also, shouldn't _lm_to_phys() do the same? It does that "mask upper
> > bits" too that was problematic in __is_lm_address(). Again, shouldn't
> > that logically be a subtract op?
>
> Yes, that's similar and a subtract should do.
>

The original bit test was written like that because it removes the
need to reason about a potential tag in the upper bits. I tried to
preserve that behavior when removing the guaranteed 1:1 split between
the vmalloc and linear regions, by masking with PAGE_OFFSET and
comparing with PAGE_END - PAGE_OFFSET, but unfortunately, both
approaches suffer from the issue fixed by this patch, i.e., that
virt_addr_valid(0x0) erroneously returns true.

I think both proposed fixes are appropriate, but they both reintroduce
the need to consider the tag. I don't know whether or where this could
pose a problem, but it needs to be taken into account.
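
To make the tag concern concrete, a small model (again with illustrative
constants, and an arbitrary 0x42 tag standing in for whatever a tagging
scheme would generate) shows that a top-byte tag defeats the new
comparison-style checks while the old mask did not:

#include <assert.h>
#include <stdint.h>

#define PAGE_OFFSET	0xffff000000000000ULL	/* illustrative only */
#define PAGE_END	0xffff800000000000ULL

int main(void)
{
	uint64_t addr = 0xffff000012345678ULL;	/* untagged linear-map address */
	/* Same address with 0x42 planted in the tag bits [63:56]. */
	uint64_t tagged = (addr & ~(0xffULL << 56)) | (0x42ULL << 56);

	/* The untagged address passes the subtract/range check ... */
	assert(addr - PAGE_OFFSET < PAGE_END - PAGE_OFFSET);
	/* ... but the tagged alias of the same location does not. */
	assert(!(tagged - PAGE_OFFSET < PAGE_END - PAGE_OFFSET));
	/* The old mask-based check ignored the tag (at the cost of the NULL bug). */
	assert((tagged & ~PAGE_OFFSET) < PAGE_END - PAGE_OFFSET);
	return 0;
}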
Catalin Marinas Feb. 1, 2021, 10:50 a.m. UTC | #5
On Mon, Feb 01, 2021 at 12:07:52AM +0100, Ard Biesheuvel wrote:
> The original bit test was written like that because it removes the
> need to reason about a potential tag in the upper bits. I tried to
> preserve that behavior when removing the guaranteed 1:1 split between
> the vmalloc and linear regions, by masking with PAGE_OFFSET and
> comparing with PAGE_END - PAGE_OFFSET, but unfortunately, both
> approaches suffer from the issue fixed by this patch, i.e., that
> virt_addr_valid(0x0) erroneously returns true.
> 
> I think both proposed fixes are appropriate, but they both reintroduce
> the need to consider the tag. I don't know whether or where this could
> pose a problem, but it needs to be taken into account.

I think we get away with this, but it should be fixed. For example, the
virt_addr_valid() call in slab.c depends on DEBUG_SLAB, but KASAN (which
generates tagged kernel addresses) depends on !DEBUG_SLAB. Some of the
uaccess hardening, like check_object_size() -> check_heap_object(), may
be skipped, but with no error raised.

Anyway, I'll write a patch to cover tagged kernel addresses as well.
When the linear map was at the top of the address range, we used to
have:

#define _virt_addr_is_linear(kaddr)	\
	(__tag_reset((u64)(kaddr)) >= PAGE_OFFSET)

Afterwards we kept the tagged addresses in mind (well, until the recent
"fix") but lost the check against user addresses with commit
68dd8ef32162 ("arm64: memory: Fix virt_addr_valid() using
__is_lm_address()").
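
A possible shape for such a tag-aware check, modelled on the old
_virt_addr_is_linear() quoted above, is to reset the tag first
(sign-extending from bit 55, which is what untagged_addr()/__tag_reset()
do) and only then apply the range test. The sketch below models that in
userspace and is only an illustration of the direction, not the actual
follow-up patch:

#include <assert.h>
#include <stdint.h>

#define PAGE_OFFSET	0xffff000000000000ULL	/* illustrative only */
#define PAGE_END	0xffff800000000000ULL

/* Model of __tag_reset(): sign-extend from bit 55 so a kernel address
 * gets its top byte restored to 0xff whatever tag it carried (same
 * shift pattern as the kernel's sign_extend64(addr, 55)). */
static uint64_t tag_reset(uint64_t addr)
{
	return (uint64_t)((int64_t)(addr << 8) >> 8);
}

/* Tag-aware linear-map check: reset the tag, then do the range test. */
static int is_lm_address(uint64_t addr)
{
	return tag_reset(addr) - PAGE_OFFSET < PAGE_END - PAGE_OFFSET;
}

int main(void)
{
	uint64_t lm = 0xffff000012345678ULL;
	uint64_t tagged = (lm & ~(0xffULL << 56)) | (0x42ULL << 56);

	assert(is_lm_address(lm));
	assert(is_lm_address(tagged));	/* the tag no longer defeats the check */
	assert(!is_lm_address(0));	/* NULL is still rejected */
	return 0;
}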