
[15/30] memory: add address_space_valid

Message ID 1369133851-1894-16-git-send-email-pbonzini@redhat.com
State New

Commit Message

Paolo Bonzini May 21, 2013, 10:57 a.m. UTC
The old-style IOMMU lets you check whether an access is valid in a
given DMAContext.  There is no equivalent for AddressSpace in the
memory API; implement it with a lookup of the dispatch tree.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 dma-helpers.c         |  5 +++++
 exec.c                | 25 +++++++++++++++++++++++++
 include/exec/memory.h | 14 ++++++++++++++
 include/sysemu/dma.h  |  3 ++-
 4 files changed, 46 insertions(+), 1 deletion(-)

Comments

David Gibson May 23, 2013, 12:05 p.m. UTC | #1
On Tue, May 21, 2013 at 12:57:16PM +0200, Paolo Bonzini wrote:
> The old-style IOMMU lets you check whether an access is valid in a
> given DMAContext.  There is no equivalent for AddressSpace in the
> memory API, implement it with a lookup of the dispatch tree.

I don't love the name - "address_space_valid" suggests to me it tests
the validity of the whole address space, not a specific range.  But
an obviously better name doesn't quickly occur to me.

Obviously I like the functionality, since I wrote that into the
DMAContext stuff specifically to support the spapr_llan driver :).


> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  dma-helpers.c         |  5 +++++
>  exec.c                | 25 +++++++++++++++++++++++++
>  include/exec/memory.h | 14 ++++++++++++++
>  include/sysemu/dma.h  |  3 ++-
>  4 files changed, 46 insertions(+), 1 deletion(-)
> 
> diff --git a/dma-helpers.c b/dma-helpers.c
> index 272632f..2962b69 100644
> --- a/dma-helpers.c
> +++ b/dma-helpers.c
> @@ -298,6 +298,11 @@ bool iommu_dma_memory_valid(DMAContext *dma, dma_addr_t addr, dma_addr_t len,
>              plen = len;
>          }
>  
> +        if (!address_space_valid(dma->as, paddr, len,
> +                                 dir == DMA_DIRECTION_FROM_DEVICE)) {
> +            return false;
> +        }
> +
>          len -= plen;
>          addr += plen;
>      }
> diff --git a/exec.c b/exec.c
> index 8d91221..8f1b507 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -2079,6 +2079,31 @@ static void cpu_notify_map_clients(void)
>      }
>  }
>  
> +bool address_space_valid(AddressSpace *as, hwaddr addr, int len, bool is_write)
> +{
> +    AddressSpaceDispatch *d = as->dispatch;
> +    MemoryRegionSection *section;
> +    int l;
> +    hwaddr page;
> +
> +    while (len > 0) {
> +        page = addr & TARGET_PAGE_MASK;
> +        l = (page + TARGET_PAGE_SIZE) - addr;
> +        if (l > len) {
> +            l = len;
> +        }
> +        section = phys_page_find(d, addr >> TARGET_PAGE_BITS);
> +        if (section->mr == &io_mem_unassigned ||
> +            (is_write && section->mr->readonly)) {
> +            return false;
> +        }
> +
> +        len -= l;
> +        addr += l;
> +    }
> +    return true;
> +}
> +
>  /* Map a physical memory region into a host virtual address.
>   * May map a subset of the requested range, given by and returned in *plen.
>   * May return NULL if resources needed to perform the mapping are exhausted.
> diff --git a/include/exec/memory.h b/include/exec/memory.h
> index 6ed593c..2e5fd11 100644
> --- a/include/exec/memory.h
> +++ b/include/exec/memory.h
> @@ -860,6 +860,20 @@ void address_space_write(AddressSpace *as, hwaddr addr,
>   */
>  void address_space_read(AddressSpace *as, hwaddr addr, uint8_t *buf, int len);
>  
> +/* address_space_valid: check for validity of an address space range
> + *
> + * Check whether memory is assigned to the given address space range.
> + *
> + * For now, addr and len should be aligned to a page size.  This limitation
> + * will be lifted in the future.
> + *
> + * @as: #AddressSpace to be accessed
> + * @addr: address within that address space
> + * @len: length of the area to be checked
> + * @is_write: indicates the transfer direction
> + */
> +bool address_space_valid(AddressSpace *as, hwaddr addr, int len, bool is_write);
> +
>  /* address_space_map: map a physical memory region into a host virtual address
>   *
>   * May map a subset of the requested range, given by and returned in @plen.
> diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
> index a52c93a..2e239dc 100644
> --- a/include/sysemu/dma.h
> +++ b/include/sysemu/dma.h
> @@ -113,7 +113,8 @@ static inline bool dma_memory_valid(DMAContext *dma,
>                                      DMADirection dir)
>  {
>      if (!dma_has_iommu(dma)) {
> -        return true;
> +        return address_space_valid(dma->as, addr, len,
> +                                   dir == DMA_DIRECTION_FROM_DEVICE);
>      } else {
>          return iommu_dma_memory_valid(dma, addr, len, dir);
>      }
Jan Kiszka May 23, 2013, 2:22 p.m. UTC | #2
On 2013-05-23 14:05, David Gibson wrote:
> On Tue, May 21, 2013 at 12:57:16PM +0200, Paolo Bonzini wrote:
>> The old-style IOMMU lets you check whether an access is valid in a
>> given DMAContext.  There is no equivalent for AddressSpace in the
>> memory API, implement it with a lookup of the dispatch tree.
> 
> I don't love the name - "address_space_valid" suggests to me it tests
> the validity of the whole address space, not a specific range.  But
> an obviously better name doesn't quickly occur to me.

Not sure ATM if someone else suggested this earlier already, but what
about something like address_space_access_valid?

Jan
Paolo Bonzini May 23, 2013, 2:43 p.m. UTC | #3
On 23/05/2013 16:22, Jan Kiszka wrote:
>>> The old-style IOMMU lets you check whether an access is valid in a
>>> given DMAContext.  There is no equivalent for AddressSpace in the
>>> memory API, implement it with a lookup of the dispatch tree.
>>
>> I don't love the name - "address_space_valid" suggests to me it tests
>> the validity of the whole address space, not a specific range.  But
>> an obviously better name doesn't quickly occur to me.
> Not sure ATM if someone else suggested this earlier already, but what
> about something like address_space_access_valid?

Sure.

Paolo
Peter Maydell May 23, 2013, 6:04 p.m. UTC | #4
On 21 May 2013 11:57, Paolo Bonzini <pbonzini@redhat.com> wrote:
> +bool address_space_valid(AddressSpace *as, hwaddr addr, int len, bool is_write)
> +{
> +    AddressSpaceDispatch *d = as->dispatch;
> +    MemoryRegionSection *section;
> +    int l;
> +    hwaddr page;
> +
> +    while (len > 0) {
> +        page = addr & TARGET_PAGE_MASK;
> +        l = (page + TARGET_PAGE_SIZE) - addr;
> +        if (l > len) {
> +            l = len;
> +        }
> +        section = phys_page_find(d, addr >> TARGET_PAGE_BITS);
> +        if (section->mr == &io_mem_unassigned ||
> +            (is_write && section->mr->readonly)) {

It's kind of bogus that io_mem_unassigned is the only MemoryRegion
that can be unreadable. Why is 'readonly' a MemoryRegion attribute
and 'writeonly' not? Shouldn't we be calling the MemoryRegionOps
accepts() callback here? What about access alignment constraints
and access size restrictions? What if the validity of the range
changes between the time you asked and when you actually do the
access?

The whole API is kind of unconvincing really, especially since
the only thing we seem to use it for is some parameter checking
in spapr_llan.c (via a huge pile of intermediate wrappers).

thanks
-- PMM
Jan Kiszka May 24, 2013, 6:13 a.m. UTC | #5
On 2013-05-23 20:04, Peter Maydell wrote:
> On 21 May 2013 11:57, Paolo Bonzini <pbonzini@redhat.com> wrote:
>> +bool address_space_valid(AddressSpace *as, hwaddr addr, int len, bool is_write)
>> +{
>> +    AddressSpaceDispatch *d = as->dispatch;
>> +    MemoryRegionSection *section;
>> +    int l;
>> +    hwaddr page;
>> +
>> +    while (len > 0) {
>> +        page = addr & TARGET_PAGE_MASK;
>> +        l = (page + TARGET_PAGE_SIZE) - addr;
>> +        if (l > len) {
>> +            l = len;
>> +        }
>> +        section = phys_page_find(d, addr >> TARGET_PAGE_BITS);
>> +        if (section->mr == &io_mem_unassigned ||
>> +            (is_write && section->mr->readonly)) {
> 
> It's kind of bogus that io_mem_unassigned is the only MemoryRegion
> that can be unreadable. Why is 'readonly' a MemoryRegion attribute
> and 'writeonly' not? Shouldn't we be calling the MemoryRegionOps
> accepts() callback here? What about access alignment constraints
> and access size restrictions? What if the validity of the range
> changes between the time you asked and when you actually do the
> access?
> 
> The whole API is kind of unconvincing really, especially since
> the only thing we seem to use it for is some parameter checking
> in spapr_llan.c (via a huge pile of intermediate wrappers).

I'll also have a use for it: replace isa_is_ioport_assigned.

Jan
Paolo Bonzini May 24, 2013, 8:02 a.m. UTC | #6
On 23/05/2013 20:04, Peter Maydell wrote:
> Shouldn't we be calling the MemoryRegionOps
> accepts() callback here? What about access alignment constraints
> and access size restrictions?

Yes, we should.

> What if the validity of the range
> changes between the time you asked and when you actually do the
> access?

If that's a concern, you shouldn't use this API, you should just do the
access and rely on the return value of address_space_rw & friends.

(Or, at least for the foreseeable future, do the checks under the BQL).

Paolo
Jan Kiszka May 24, 2013, 10:28 a.m. UTC | #7
On 2013-05-24 08:13, Jan Kiszka wrote:
> On 2013-05-23 20:04, Peter Maydell wrote:
>> On 21 May 2013 11:57, Paolo Bonzini <pbonzini@redhat.com> wrote:
>>> +bool address_space_valid(AddressSpace *as, hwaddr addr, int len, bool is_write)
>>> +{
>>> +    AddressSpaceDispatch *d = as->dispatch;
>>> +    MemoryRegionSection *section;
>>> +    int l;
>>> +    hwaddr page;
>>> +
>>> +    while (len > 0) {
>>> +        page = addr & TARGET_PAGE_MASK;
>>> +        l = (page + TARGET_PAGE_SIZE) - addr;
>>> +        if (l > len) {
>>> +            l = len;
>>> +        }
>>> +        section = phys_page_find(d, addr >> TARGET_PAGE_BITS);
>>> +        if (section->mr == &io_mem_unassigned ||
>>> +            (is_write && section->mr->readonly)) {
>>
>> It's kind of bogus that io_mem_unassigned is the only MemoryRegion
>> that can be unreadable. Why is 'readonly' a MemoryRegion attribute
>> and 'writeonly' not? Shouldn't we be calling the MemoryRegionOps
>> accepts() callback here? What about access alignment constraints
>> and access size restrictions? What if the validity of the range
>> changes between the time you asked and when you actually do the
>> access?
>>
>> The whole API is kind of unconvincing really, especially since
>> the only thing we seem to use it for is some parameter checking
>> in spapr_llan.c (via a huge pile of intermediate wrappers).
> 
> I'll also have a use for it: replace isa_is_ioport_assigned.

But for this use case, something like
memory_region_access_valid(MemoryRegion *, ...) would be more helpful
because pci_address_space_io returns a memory region, not an address
space. Can we change it? dma_memory_valid could simply pass the address
space's root region.

Jan
Peter Maydell May 24, 2013, 10:50 a.m. UTC | #8
On 24 May 2013 07:13, Jan Kiszka <jan.kiszka@web.de> wrote:
> I'll also have a use for it: replace isa_is_ioport_assigned.

That seems like it's something different: it's asking
"has some other bit of QEMU registered a handler for
this ioport?", not "at this moment in time if I make
an I/O access will it succeed?" (which in principle
should actually go all the way through and call a
registered ioport handler which might then say "no,
for this specific access we will fail").

thanks
-- PMM
Peter Maydell May 24, 2013, 10:52 a.m. UTC | #9
On 24 May 2013 09:02, Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 23/05/2013 20:04, Peter Maydell wrote:
>> Shouldn't we be calling the MemoryRegionOps
>> accepts() callback here? What about access alignment constraints
>> and access size restrictions?
>
> Yes, we should.
>
>> What if the validity of the range
>> changes between the time you asked and when you actually do the
>> access?
>
> If that's a concern, you shouldn't use this API, you should just do the
> access and rely on the return value of address_space_rw & friends.

So when *is* it a good idea to use this API? In real
hardware you don't usually get a "tell me whether this
access would succeed if I did it" bus operation -- you
just do the operation and the memory transaction either
succeeds or it doesn't. Are we modelling something that
really exists in hardware on spapr here?

thanks
-- PMM
Jan Kiszka May 24, 2013, 11:02 a.m. UTC | #10
On 2013-05-24 12:50, Peter Maydell wrote:
> On 24 May 2013 07:13, Jan Kiszka <jan.kiszka@web.de> wrote:
>> I'll also have a use for it: replace isa_is_ioport_assigned.
> 
> That seems like it's something different: it's asking
> "has some other bit of QEMU registered a handler for
> this ioport?", not "at this moment in time if I make
> an I/O access will it succeed?" (which in principle
> should actually go all the way through and call a
> registered ioport handler which might then say "no,
> for this specific access we will fail").

Hmm... Paolo suggested to use it. But given the current arguments I
guess it's better to stick with memory_region_find for that purpose.

Jan
Paolo Bonzini May 24, 2013, 12:58 p.m. UTC | #11
On 24/05/2013 12:52, Peter Maydell wrote:
> On 24 May 2013 09:02, Paolo Bonzini <pbonzini@redhat.com> wrote:
>> On 23/05/2013 20:04, Peter Maydell wrote:
>>> Shouldn't we be calling the MemoryRegionOps
>>> accepts() callback here? What about access alignment constraints
>>> and access size restrictions?
>>
>> Yes, we should.
>>
>>> What if the validity of the range
>>> changes between the time you asked and when you actually do the
>>> access?
>>
>> If that's a concern, you shouldn't use this API, you should just do the
>> access and rely on the return value of address_space_rw & friends.
> 
> So when *is* it a good idea to use this API? In real
> hardware you don't usually get a "tell me whether this
> access would succeed if I did it" bus operation -- you
> just do the operation and the memory transaction either
> succeeds or it doesn't. Are we modelling something that
> really exists in hardware on spapr here?

Well, sPAPR is not hardware. :)  It is a paravirtualized interface.

Anyhow, I can eliminate all the references to unassigned and basically
do this:

bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len, bool is_write)
{
    MemoryRegionSection *section;
    hwaddr l, xlat;

    while (len > 0) {
        l = len;
        section = address_space_translate(as, addr, &xlat, &l, is_write);
        if (!memory_access_is_direct(section->mr, is_write)) {
            l = memory_access_size(l, addr);
            if (!memory_region_access_valid(section->mr, xlat, l)) {
                return false;
            }
        }

        len -= l;
        addr += l;
    }
    return true;
}

It requires some more changes however.  I'll drop this patch.  If it's okay
for you, I'll send a pull request up to "memory: clean up phys_page_find"
and go on with the next series.

Paolo
Peter Maydell May 24, 2013, 1:27 p.m. UTC | #12
On 24 May 2013 13:58, Paolo Bonzini <pbonzini@redhat.com> wrote:
> If it's okay
> for you, I'll send a pull request up to "memory: clean up phys_page_find"
> and go on with the next series.

That's fine with me, but please don't forget to fix up
the doc comment for memory_region_find() (see comments on
patch 6/30). You can do that in a patch in the next series,
though, I guess.

thanks
-- PMM
Paolo Bonzini May 24, 2013, 1:33 p.m. UTC | #13
On 24/05/2013 15:27, Peter Maydell wrote:
> On 24 May 2013 13:58, Paolo Bonzini <pbonzini@redhat.com> wrote:
>> If it's okay
>> for you, I'll send a pull request up to "memory: clean up phys_page_find"
>> and go on with the next series.
> 
> That's fine with me, but please don't forget to fix up
> the doc comment for memory_region_find() (see comments on
> patch 6/30). You can do that in a patch in the next series,
> though, I guess.

What about this:

/**
 * memory_region_find: translate an address/size relative to a
 * MemoryRegion into a #MemoryRegionSection.
 *
 * Locates the first #MemoryRegion within @mr that overlaps the range
 * given by @addr and @size.
 *
 * Returns a #MemoryRegionSection that describes a contiguous overlap.
 * It will have the following characteristics:
 *    .@size = 0 iff no overlap was found
 *    .@mr is non-%NULL iff an overlap was found
 *
 * Remember that in the return value the @offset_within_region is
 * relative to the returned region (in the .@mr field), not to the
 * @mr argument.
 *
 * Similarly, the .@offset_within_address_space is relative to the
 * address space that contains both regions, the passed and the
 * returned one.  However, in the special case where the @mr argument
 * has no parent (and thus is the root of the address space), the
 * following will hold:
 *    .@offset_within_address_space >= @addr
 *    .@offset_within_address_space + .@size <= @addr + @size
 *
 * @mr: a MemoryRegion within which @addr is a relative address
 * @addr: start of the area within @mr to be searched
 * @size: size of the area to be searched
 */

?
Paolo
Peter Maydell May 24, 2013, 1:38 p.m. UTC | #14
On 24 May 2013 14:33, Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 24/05/2013 15:27, Peter Maydell wrote:
>> On 24 May 2013 13:58, Paolo Bonzini <pbonzini@redhat.com> wrote:
>>> If it's okay
>>> for you, I'll send a pull request up to "memory: clean up phys_page_find"
>>> and go on with the next series.
>>
>> That's fine with me, but please don't forget to fix up
>> the doc comment for memory_region_find() (see comments on
>> patch 6/30). You can do that in a patch in the next series,
>> though, I guess.
>
> What about this:
>
> /**
>  * memory_region_find: translate an address/size relative to a
>  * MemoryRegion into a #MemoryRegionSection.
>  *
>  * Locates the first #MemoryRegion within @mr that overlaps the range
>  * given by @addr and @size.
>  *
>  * Returns a #MemoryRegionSection that describes a contiguous overlap.
>  * It will have the following characteristics:
>  *    .@size = 0 iff no overlap was found
>  *    .@mr is non-%NULL iff an overlap was found
>  *
>  * Remember that in the return value the @offset_within_region is
>  * relative to the returned region (in the .@mr field), not to the
>  * @mr argument.
>  *
>  * Similarly, the .@offset_within_address_space is relative to the
>  * address space that contains both regions, the passed and the
>  * returned one.  However, in the special case where the @mr argument
>  * has no parent (and thus is the root of the address space), the
>  * following will hold:
>  *    .@offset_within_address_space >= @addr
>  *    .@offset_within_address_space + .@size <= @addr + @size
>  *
>  * @mr: a MemoryRegion within which @addr is a relative address
>  * @addr: start of the area within @mr to be searched
>  * @size: size of the area to be searched
>  */

Yes, that looks OK to me.

thanks
-- PMM
David Gibson May 25, 2013, 3:44 a.m. UTC | #15
On Fri, May 24, 2013 at 11:52:17AM +0100, Peter Maydell wrote:
> On 24 May 2013 09:02, Paolo Bonzini <pbonzini@redhat.com> wrote:
> > On 23/05/2013 20:04, Peter Maydell wrote:
> >> Shouldn't we be calling the MemoryRegionOps
> >> accepts() callback here? What about access alignment constraints
> >> and access size restrictions?
> >
> > Yes, we should.
> >
> >> What if the validity of the range
> >> changes between the time you asked and when you actually do the
> >> access?
> >
> > If that's a concern, you shouldn't use this API, you should just do the
> > access and rely on the return value of address_space_rw & friends.
> 
> So when *is* it a good idea to use this API? In real
> hardware you don't usually get a "tell me whether this
> access would succeed if I did it" bus operation -- you
> just do the operation and the memory transaction either
> succeeds or it doesn't. Are we modelling something that
> really exists in hardware on spapr here?

So, as a general rule, you should just attempt the access and handle
failures - this is a bad interface.  The reason I added it, however,
is that the PAPR specification mandates that the virtual LAN pre-check
various buffers when they're registered, and return specific errors if
they're not mapped to valid memory.  Since we have nothing to read or
write at that point, adding this interface was the only way I could
see to implement that requirement.

Or... a bit more charitably: You should always handle failures at the
point of read or write, but using this interface can give you an
earlier, and therefore potentially easier to analyze, error in the
more common failure cases, even if there are more complex cases where
the pre-check succeeds but the read/write still fails later.
Peter Maydell May 25, 2013, 9:23 a.m. UTC | #16
On 25 May 2013 04:44, David Gibson <david@gibson.dropbear.id.au> wrote:
> On Fri, May 24, 2013 at 11:52:17AM +0100, Peter Maydell wrote:
>> So when *is* it a good idea to use this API? In real
>> hardware you don't usually get a "tell me whether this
>> access would succeed if I did it" bus operation -- you
>> just do the operation and the memory transaction either
>> succeeds or it doesn't. Are we modelling something that
>> really exists in hardware on spapr here?
>
> So, as a general rule, you should just attempt the access and handle
> failures - this is a bad interface.  The reason I added it, however,
> is that the PAPR specification mandates that the virtual LAN pre-check
> various buffers when they're registered, and return specific errors if
> they're not mapped to valid memory.  Since we have nothing to read or
> write at that point, adding this interface was the only way I could
> see to implement that requirement.

Would it work to just read and throw away the result of the read?

> Or... a bit more charitably: You should always handle failures at the
> point of read or write, but using this interface can give you an
> earlier, and therefore potentially easier to analyze, error in the
> more common failure cases, even if there are more complex cases where
> the pre-check succeeds but the read/write still fails later.

There's also the converse case where the pre-check fails but
doing the operation at the proper time would succeed, in
which case where we're modelling real hardware we would
be doing it wrong. So the set of cases where it's OK to
pre-check seems a bit limited.

thanks
-- PMM
David Gibson May 26, 2013, 1:02 p.m. UTC | #17
On Sat, May 25, 2013 at 10:23:32AM +0100, Peter Maydell wrote:
> On 25 May 2013 04:44, David Gibson <david@gibson.dropbear.id.au> wrote:
> > On Fri, May 24, 2013 at 11:52:17AM +0100, Peter Maydell wrote:
> >> So when *is* it a good idea to use this API? In real
> >> hardware you don't usually get a "tell me whether this
> >> access would succeed if I did it" bus operation -- you
> >> just do the operation and the memory transaction either
> >> succeeds or it doesn't. Are we modelling something that
> >> really exists in hardware on spapr here?
> >
> > So, as a general rule, you should just attempt the access and handle
> > failures - this is a bad interface.  The reason I added it, however,
> > is that the PAPR specification mandates that the virtual LAN pre-check
> > various buffers when they're registered, and return specific errors if
> > they're not mapped to valid memory.  Since we have nothing to read or
> > write at that point, adding this interface was the only way I could
> > see to implement that requirement.
> 
> Would it work to just read and throw away the result of the read?

Not when checking for writability.

> > Or... a bit more charitably: You should always handle failures at the
> > point of read or write, but using this interface can give you an
> > earlier, and therefore potentially easier to analyze, error in the
> > more common failure cases, even if there are more complex cases where
> > the pre-check succeeds but the read/write still fails later.
> 
> There's also the converse case where the pre-check fails but
> doing the operation at the proper time would succeed, in
> which case where we're modelling real hardware we would
> be doing it wrong. So the set of cases where it's OK to
> pre-check seems a bit limited.

Whether originally real or otherwise, this is a question of faithfully
implementing what the hardware is supposed to do.  In the case of the
PAPR llan, the (virtual) hardware specification says that the buffer
accessibility is checked at buffer add time, and the driver won't work
if it only makes the buffers accessible between that point and the
actual buffer access.  It would be entirely possible to design real
hardware with similar behaviour (probe buffer accessibility when a
descriptor is added to a pool, say), although it's not the sort of
thing hardware people generally do.

So, certainly qemu should not go using this pre-check when the
hardware it's emulating does not do such a check.  Designing hardware
that does do a pre-check is not, IMO, a good idea on balance, but the
point is that there is a reason (albeit, not a great one) you might
want to design the (possibly virtual) hardware that way.

As an aside, when you consider devices with embedded firmware - which
is practically everything these days - the distinction between "real"
hardware and virtual hardware can get kind of blurry.

Patch

diff --git a/dma-helpers.c b/dma-helpers.c
index 272632f..2962b69 100644
--- a/dma-helpers.c
+++ b/dma-helpers.c
@@ -298,6 +298,11 @@  bool iommu_dma_memory_valid(DMAContext *dma, dma_addr_t addr, dma_addr_t len,
             plen = len;
         }
 
+        if (!address_space_valid(dma->as, paddr, len,
+                                 dir == DMA_DIRECTION_FROM_DEVICE)) {
+            return false;
+        }
+
         len -= plen;
         addr += plen;
     }
diff --git a/exec.c b/exec.c
index 8d91221..8f1b507 100644
--- a/exec.c
+++ b/exec.c
@@ -2079,6 +2079,31 @@  static void cpu_notify_map_clients(void)
     }
 }
 
+bool address_space_valid(AddressSpace *as, hwaddr addr, int len, bool is_write)
+{
+    AddressSpaceDispatch *d = as->dispatch;
+    MemoryRegionSection *section;
+    int l;
+    hwaddr page;
+
+    while (len > 0) {
+        page = addr & TARGET_PAGE_MASK;
+        l = (page + TARGET_PAGE_SIZE) - addr;
+        if (l > len) {
+            l = len;
+        }
+        section = phys_page_find(d, addr >> TARGET_PAGE_BITS);
+        if (section->mr == &io_mem_unassigned ||
+            (is_write && section->mr->readonly)) {
+            return false;
+        }
+
+        len -= l;
+        addr += l;
+    }
+    return true;
+}
+
 /* Map a physical memory region into a host virtual address.
  * May map a subset of the requested range, given by and returned in *plen.
  * May return NULL if resources needed to perform the mapping are exhausted.
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 6ed593c..2e5fd11 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -860,6 +860,20 @@  void address_space_write(AddressSpace *as, hwaddr addr,
  */
 void address_space_read(AddressSpace *as, hwaddr addr, uint8_t *buf, int len);
 
+/* address_space_valid: check for validity of an address space range
+ *
+ * Check whether memory is assigned to the given address space range.
+ *
+ * For now, addr and len should be aligned to a page size.  This limitation
+ * will be lifted in the future.
+ *
+ * @as: #AddressSpace to be accessed
+ * @addr: address within that address space
+ * @len: length of the area to be checked
+ * @is_write: indicates the transfer direction
+ */
+bool address_space_valid(AddressSpace *as, hwaddr addr, int len, bool is_write);
+
 /* address_space_map: map a physical memory region into a host virtual address
  *
  * May map a subset of the requested range, given by and returned in @plen.
diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
index a52c93a..2e239dc 100644
--- a/include/sysemu/dma.h
+++ b/include/sysemu/dma.h
@@ -113,7 +113,8 @@  static inline bool dma_memory_valid(DMAContext *dma,
                                     DMADirection dir)
 {
     if (!dma_has_iommu(dma)) {
-        return true;
+        return address_space_valid(dma->as, addr, len,
+                                   dir == DMA_DIRECTION_FROM_DEVICE);
     } else {
         return iommu_dma_memory_valid(dma, addr, len, dir);
     }