
[for-5.0,v11,05/20] virtio-iommu: Endpoint and domains structs and helpers

Message ID 20191122182943.4656-6-eric.auger@redhat.com
State New
Series VIRTIO-IOMMU device

Commit Message

Eric Auger Nov. 22, 2019, 6:29 p.m. UTC
This patch introduces the domain and endpoint internal
datatypes. Both are stored in RB trees. The domain
owns a list of the endpoints attached to it.

Helpers to get/put endpoints and domains are introduced.
The get() helpers will become static in subsequent patches.
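
For illustration only (not part of this patch), a later ATTACH request
handler is expected to consume these helpers roughly as sketched below;
the handler name and request layout are assumed from the virtio-iommu
spec header and from later patches in this series:

/*
 * Illustrative sketch only -- a later ATTACH handler would look
 * roughly like this, assuming struct virtio_iommu_req_attach from the
 * virtio-iommu spec header.
 */
static int virtio_iommu_attach(VirtIOIOMMU *s,
                               struct virtio_iommu_req_attach *req)
{
    uint32_t domain_id = le32_to_cpu(req->domain);
    uint32_t ep_id = le32_to_cpu(req->endpoint);
    viommu_endpoint *ep = virtio_iommu_get_endpoint(s, ep_id);
    viommu_domain *domain = virtio_iommu_get_domain(s, domain_id);

    if (ep->domain) {
        /* The endpoint moves to a new domain: detach it first. */
        virtio_iommu_detach_endpoint_from_domain(ep);
    }

    ep->domain = domain;
    QLIST_INSERT_HEAD(&domain->endpoint_list, ep, next);

    return VIRTIO_IOMMU_S_OK;
}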

Signed-off-by: Eric Auger <eric.auger@redhat.com>

---

v10 -> v11:
- fixed interval_cmp (<= -> < and >= -> >); see the boundary-case note
  after this changelog
- removed unused viommu field from endpoint
- removed Bharat's R-b

v9 -> v10:
- added Bharat's R-b

v6 -> v7:
- in virtio_iommu_find_add_as() the bus number computation may not
  be finalized yet, so we cannot register the EPs at that time.
  Hence, remove the get_endpoint() call and do not use the bus
  number when building the memory region name string (it is only
  used for debug anyway).

v4 -> v5:
- initialize as->endpoint_list

v3 -> v4:
- new separate patch
---
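As a side note on the interval_cmp change above, here is a small
standalone illustration of the boundary case it addresses, as I
understand it (not part of the patch): the interval bounds are
inclusive, so two ranges that meet at a single address overlap and must
compare equal, which the previous <=/>= comparisons did not report.

#include <assert.h>
#include <stdint.h>

typedef struct viommu_interval {
    uint64_t low;
    uint64_t high;      /* inclusive upper bound */
} viommu_interval;

/* Same comparison logic as the fixed interval_cmp() in this patch. */
static int interval_cmp(const viommu_interval *a, const viommu_interval *b)
{
    if (a->high < b->low) {
        return -1;
    } else if (b->high < a->low) {
        return 1;
    } else {
        return 0;       /* the two inclusive ranges overlap */
    }
}

int main(void)
{
    viommu_interval mapping = { .low = 0x1000, .high = 0x1fff };
    viommu_interval last    = { .low = 0x1fff, .high = 0x1fff };

    /*
     * mapping.high == last.low: with "<=" the comparator reported the
     * intervals as disjoint (-1), so a lookup for the last address of
     * a mapping would miss it; with "<" they correctly compare equal.
     */
    assert(interval_cmp(&mapping, &last) == 0);
    assert(interval_cmp(&last, &mapping) == 0);
    return 0;
}
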
 hw/virtio/trace-events   |   4 ++
 hw/virtio/virtio-iommu.c | 117 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 121 insertions(+)

Comments

Jean-Philippe Brucker Dec. 10, 2019, 4:37 p.m. UTC | #1
On Fri, Nov 22, 2019 at 07:29:28PM +0100, Eric Auger wrote:
> +typedef struct viommu_domain {
> +    uint32_t id;
> +    GTree *mappings;
> +    QLIST_HEAD(, viommu_endpoint) endpoint_list;
> +} viommu_domain;
> +
> +typedef struct viommu_endpoint {
> +    uint32_t id;
> +    viommu_domain *domain;
> +    QLIST_ENTRY(viommu_endpoint) next;
> +} viommu_endpoint;

There might be a way to merge viommu_endpoint and the IOMMUDevice
structure introduced in patch 4, since they both represent one endpoint.
Maybe virtio_iommu_find_add_pci_as() could add the IOMMUDevice to
s->endpoints, and IOMMUDevice could store the endpoint ID rather than bus
and devfn.
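
Roughly, the merged structure could look something like the sketch
below (purely hypothetical, field names invented here, and the existing
patch-4 members such as the IOMMU memory region are left out):

typedef struct IOMMUDevice {
    uint32_t ep_id;                   /* instead of bus pointer + devfn     */
    struct viommu_domain *domain;     /* currently attached domain, or NULL */
    QLIST_ENTRY(IOMMUDevice) next;    /* entry in domain->endpoint_list     */
    /* ... existing patch-4 fields (iommu_mr, as, ...) ... */
} IOMMUDevice;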

> +typedef struct viommu_interval {
> +    uint64_t low;
> +    uint64_t high;
> +} viommu_interval;

I guess these should be named in CamelCase? Although if we're allowed to
choose my vote goes to underscores :)

Thanks,
Jean
Eric Auger Dec. 19, 2019, 6:31 p.m. UTC | #2
Hi Jean,

On 12/10/19 5:37 PM, Jean-Philippe Brucker wrote:
> On Fri, Nov 22, 2019 at 07:29:28PM +0100, Eric Auger wrote:
>> +typedef struct viommu_domain {
>> +    uint32_t id;
>> +    GTree *mappings;
>> +    QLIST_HEAD(, viommu_endpoint) endpoint_list;
>> +} viommu_domain;
>> +
>> +typedef struct viommu_endpoint {
>> +    uint32_t id;
>> +    viommu_domain *domain;
>> +    QLIST_ENTRY(viommu_endpoint) next;
>> +} viommu_endpoint;
> 
> There might be a way to merge viommu_endpoint and the IOMMUDevice
> structure introduced in patch 4, since they both represent one endpoint.
> Maybe virtio_iommu_find_add_pci_as() could add the IOMMUDevice to
> s->endpoints, and IOMMUDevice could store the endpoint ID rather than bus
> and devfn.

On PCI bus enumeration we locally store the PCI bus hierarchy in the
form of a GHashTable of IOMMUDevice indexed by iommu_pci_bus pointer.
Those are all the devices attached to the downstream buses. We also use
an array of iommu pci bus pointers indexed by bus number; it is lazily
populated because, at enumeration time, we do not know the bus number
yet. As you pointed out, I haven't used that bus-number-indexed array in
this series and I actually should: currently I am not checking, on
attach, that the sid effectively corresponds to a sid protected by this
IOMMU. I will add this in my next version. The above structures are used
in the intel_iommu and smmu code as well, and I think they may
eventually be factorized into a common base class.

On the other hand, the GTree of viommu_endpoint - soon to be renamed in
CamelCase form ;-) - holds the EPs that are actually attached to a
domain. It is indexed by sid rather than by bus pointer, which is better
suited to the virtio-iommu case.

So, despite your suggestion, I am tempted to keep the structures
separate: the first ones are common to all IOMMU emulation code, while
the last one is tailored to the virtio-iommu operations.
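
For reference, a rough sketch of the two families of structures I am
describing (type and field names are approximate and only for
illustration; the struct wrapping the lookup members is invented for
the sketch, and the first group follows the pattern already used by
intel_iommu and smmu):

/* 1) PCI topology side, filled during bus enumeration; IOMMUDevice and
 *    PCIBus come from patch 4 and hw/pci respectively. */
typedef struct IOMMUPciBus {
    PCIBus      *bus;
    IOMMUDevice *pbdev[PCI_DEVFN_MAX];      /* one slot per devfn */
} IOMMUPciBus;

/* 2) The lookup members kept in the VirtIOIOMMU state. */
struct VirtIOIOMMULookup {
    /* every device behind the IOMMU, keyed by topology */
    GHashTable  *as_by_busptr;              /* PCIBus * -> IOMMUPciBus *      */
    IOMMUPciBus *iommu_pcibus_by_bus_num[PCI_BUS_MAX]; /* lazily populated    */

    /* only the endpoints/domains a guest actually attaches */
    GTree *endpoints;                       /* sid       -> viommu_endpoint * */
    GTree *domains;                         /* domain id -> viommu_domain *   */
};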

Thoughts?

Eric

> 
>> +typedef struct viommu_interval {
>> +    uint64_t low;
>> +    uint64_t high;
>> +} viommu_interval;
> 
> I guess these should be named in CamelCase? Although if we're allowed to
> choose my vote goes to underscores :)
> 
> Thanks,
> Jean
>
Jean-Philippe Brucker Dec. 20, 2019, 5 p.m. UTC | #3
On Thu, Dec 19, 2019 at 07:31:08PM +0100, Auger Eric wrote:
> Hi Jean,
> 
> On 12/10/19 5:37 PM, Jean-Philippe Brucker wrote:
> > On Fri, Nov 22, 2019 at 07:29:28PM +0100, Eric Auger wrote:
> >> +typedef struct viommu_domain {
> >> +    uint32_t id;
> >> +    GTree *mappings;
> >> +    QLIST_HEAD(, viommu_endpoint) endpoint_list;
> >> +} viommu_domain;
> >> +
> >> +typedef struct viommu_endpoint {
> >> +    uint32_t id;
> >> +    viommu_domain *domain;
> >> +    QLIST_ENTRY(viommu_endpoint) next;
> >> +} viommu_endpoint;
> > 
> > There might be a way to merge viommu_endpoint and the IOMMUDevice
> > structure introduced in patch 4, since they both represent one endpoint.
> > Maybe virtio_iommu_find_add_pci_as() could add the IOMMUDevice to
> > s->endpoints, and IOMMUDevice could store the endpoint ID rather than bus
> > and devfn.
> 
> On PCI bus enumeration we locally store the PCI bus hierarchy in the
> form of a GHashTable of IOMMUDevice indexed by iommu_pci_bus pointer.
> Those are all the devices attached to the downstream buses. We also use
> an array of iommu pci bus pointers indexed by bus number; it is lazily
> populated because, at enumeration time, we do not know the bus number
> yet. As you pointed out, I haven't used that bus-number-indexed array in
> this series and I actually should: currently I am not checking, on
> attach, that the sid effectively corresponds to a sid protected by this
> IOMMU. I will add this in my next version. The above structures are used
> in the intel_iommu and smmu code as well, and I think they may
> eventually be factorized into a common base class.
> 
> On the other hand, the GTree of viommu_endpoint - soon to be renamed in
> CamelCase form ;-) - holds the EPs that are actually attached to a
> domain. It is indexed by sid rather than by bus pointer, which is better
> suited to the virtio-iommu case.
> 
> So, despite your suggestion, I am tempted to keep the structures
> separate: the first ones are common to all IOMMU emulation code, while
> the last one is tailored to the virtio-iommu operations.
> 
> Thoughts?

Makes sense, it seems better to keep them separate. I had missed that the
PCI bus number is resolved later, and started to move the endpoint ID into
IOMMUDevice when adding MMIO support, but I'll need to revisit this.

I'll be off for two weeks, have a nice holiday!

Thanks,
Jean
Eric Auger Dec. 23, 2019, 9:11 a.m. UTC | #4
Hi Jean,

On 12/20/19 6:00 PM, Jean-Philippe Brucker wrote:
> On Thu, Dec 19, 2019 at 07:31:08PM +0100, Auger Eric wrote:
>> Hi Jean,
>>
>> On 12/10/19 5:37 PM, Jean-Philippe Brucker wrote:
>>> On Fri, Nov 22, 2019 at 07:29:28PM +0100, Eric Auger wrote:
>>>> +typedef struct viommu_domain {
>>>> +    uint32_t id;
>>>> +    GTree *mappings;
>>>> +    QLIST_HEAD(, viommu_endpoint) endpoint_list;
>>>> +} viommu_domain;
>>>> +
>>>> +typedef struct viommu_endpoint {
>>>> +    uint32_t id;
>>>> +    viommu_domain *domain;
>>>> +    QLIST_ENTRY(viommu_endpoint) next;
>>>> +} viommu_endpoint;
>>>
>>> There might be a way to merge viommu_endpoint and the IOMMUDevice
>>> structure introduced in patch 4, since they both represent one endpoint.
>>> Maybe virtio_iommu_find_add_pci_as() could add the IOMMUDevice to
>>> s->endpoints, and IOMMUDevice could store the endpoint ID rather than bus
>>> and devfn.
>>
>> On PCI bus enumeration we locally store the PCI bus hierarchy in the
>> form of a GHashTable of IOMMUDevice indexed by iommu_pci_bus pointer.
>> Those are all the devices attached to the downstream buses. We also use
>> an array of iommu pci bus pointers indexed by bus number; it is lazily
>> populated because, at enumeration time, we do not know the bus number
>> yet. As you pointed out, I haven't used that bus-number-indexed array in
>> this series and I actually should: currently I am not checking, on
>> attach, that the sid effectively corresponds to a sid protected by this
>> IOMMU. I will add this in my next version. The above structures are used
>> in the intel_iommu and smmu code as well, and I think they may
>> eventually be factorized into a common base class.
>>
>> On the other hand, the GTree of viommu_endpoint - soon to be renamed in
>> CamelCase form ;-) - holds the EPs that are actually attached to a
>> domain. It is indexed by sid rather than by bus pointer, which is better
>> suited to the virtio-iommu case.
>>
>> So, despite your suggestion, I am tempted to keep the structures
>> separate: the first ones are common to all IOMMU emulation code, while
>> the last one is tailored to the virtio-iommu operations.
>>
>> Thoughts?
> 
> Makes sense, it seems better to keep them separate. I had missed that the
> PCI bus number is resolved later, and started to move the endpoint ID into
> IOMMUDevice when adding MMIO support, but I'll need to revisit this.
> 
> I'll be off for two weeks, have a nice holiday!

Thanks, you too.

Merry Christmas! :-)

Eric
> 
> Thanks,
> Jean
>

Patch

diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index b32169d56c..a373bdebb3 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -67,3 +67,7 @@  virtio_iommu_map(uint32_t domain_id, uint64_t virt_start, uint64_t virt_end, uin
 virtio_iommu_unmap(uint32_t domain_id, uint64_t virt_start, uint64_t virt_end) "domain=%d virt_start=0x%"PRIx64" virt_end=0x%"PRIx64
 virtio_iommu_translate(const char *name, uint32_t rid, uint64_t iova, int flag) "mr=%s rid=%d addr=0x%"PRIx64" flag=%d"
 virtio_iommu_init_iommu_mr(char *iommu_mr) "init %s"
+virtio_iommu_get_endpoint(uint32_t ep_id) "Alloc endpoint=%d"
+virtio_iommu_put_endpoint(uint32_t ep_id) "Free endpoint=%d"
+virtio_iommu_get_domain(uint32_t domain_id) "Alloc domain=%d"
+virtio_iommu_put_domain(uint32_t domain_id) "Free domain=%d"
diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
index 2d7b1752b7..235bde2203 100644
--- a/hw/virtio/virtio-iommu.c
+++ b/hw/virtio/virtio-iommu.c
@@ -32,15 +32,116 @@ 
 #include "hw/virtio/virtio-bus.h"
 #include "hw/virtio/virtio-access.h"
 #include "hw/virtio/virtio-iommu.h"
+#include "hw/pci/pci_bus.h"
+#include "hw/pci/pci.h"
 
 /* Max size */
 #define VIOMMU_DEFAULT_QUEUE_SIZE 256
 
+typedef struct viommu_domain {
+    uint32_t id;
+    GTree *mappings;
+    QLIST_HEAD(, viommu_endpoint) endpoint_list;
+} viommu_domain;
+
+typedef struct viommu_endpoint {
+    uint32_t id;
+    viommu_domain *domain;
+    QLIST_ENTRY(viommu_endpoint) next;
+} viommu_endpoint;
+
+typedef struct viommu_interval {
+    uint64_t low;
+    uint64_t high;
+} viommu_interval;
+
 static inline uint16_t virtio_iommu_get_sid(IOMMUDevice *dev)
 {
     return PCI_BUILD_BDF(pci_bus_num(dev->bus), dev->devfn);
 }
 
+static gint interval_cmp(gconstpointer a, gconstpointer b, gpointer user_data)
+{
+    viommu_interval *inta = (viommu_interval *)a;
+    viommu_interval *intb = (viommu_interval *)b;
+
+    if (inta->high < intb->low) {
+        return -1;
+    } else if (intb->high < inta->low) {
+        return 1;
+    } else {
+        return 0;
+    }
+}
+
+static void virtio_iommu_detach_endpoint_from_domain(viommu_endpoint *ep)
+{
+    QLIST_REMOVE(ep, next);
+    ep->domain = NULL;
+}
+
+viommu_endpoint *virtio_iommu_get_endpoint(VirtIOIOMMU *s, uint32_t ep_id);
+viommu_endpoint *virtio_iommu_get_endpoint(VirtIOIOMMU *s, uint32_t ep_id)
+{
+    viommu_endpoint *ep;
+
+    ep = g_tree_lookup(s->endpoints, GUINT_TO_POINTER(ep_id));
+    if (ep) {
+        return ep;
+    }
+    ep = g_malloc0(sizeof(*ep));
+    ep->id = ep_id;
+    trace_virtio_iommu_get_endpoint(ep_id);
+    g_tree_insert(s->endpoints, GUINT_TO_POINTER(ep_id), ep);
+    return ep;
+}
+
+static void virtio_iommu_put_endpoint(gpointer data)
+{
+    viommu_endpoint *ep = (viommu_endpoint *)data;
+
+    if (ep->domain) {
+        virtio_iommu_detach_endpoint_from_domain(ep);
+        g_tree_unref(ep->domain->mappings);
+    }
+
+    trace_virtio_iommu_put_endpoint(ep->id);
+    g_free(ep);
+}
+
+viommu_domain *virtio_iommu_get_domain(VirtIOIOMMU *s, uint32_t domain_id);
+viommu_domain *virtio_iommu_get_domain(VirtIOIOMMU *s, uint32_t domain_id)
+{
+    viommu_domain *domain;
+
+    domain = g_tree_lookup(s->domains, GUINT_TO_POINTER(domain_id));
+    if (domain) {
+        return domain;
+    }
+    domain = g_malloc0(sizeof(*domain));
+    domain->id = domain_id;
+    domain->mappings = g_tree_new_full((GCompareDataFunc)interval_cmp,
+                                   NULL, (GDestroyNotify)g_free,
+                                   (GDestroyNotify)g_free);
+    g_tree_insert(s->domains, GUINT_TO_POINTER(domain_id), domain);
+    QLIST_INIT(&domain->endpoint_list);
+    trace_virtio_iommu_get_domain(domain_id);
+    return domain;
+}
+
+static void virtio_iommu_put_domain(gpointer data)
+{
+    viommu_domain *domain = (viommu_domain *)data;
+    viommu_endpoint *iter, *tmp;
+
+    QLIST_FOREACH_SAFE(iter, &domain->endpoint_list, next, tmp) {
+        virtio_iommu_detach_endpoint_from_domain(iter);
+    }
+    g_tree_destroy(domain->mappings);
+    trace_virtio_iommu_put_domain(domain->id);
+    g_free(domain);
+}
+
 static AddressSpace *virtio_iommu_find_add_as(PCIBus *bus, void *opaque,
                                               int devfn)
 {
@@ -293,6 +394,13 @@  static const VMStateDescription vmstate_virtio_iommu_device = {
     .unmigratable = 1,
 };
 
+static gint int_cmp(gconstpointer a, gconstpointer b, gpointer user_data)
+{
+    uint ua = GPOINTER_TO_UINT(a);
+    uint ub = GPOINTER_TO_UINT(b);
+    return (ua > ub) - (ua < ub);
+}
+
 static void virtio_iommu_device_realize(DeviceState *dev, Error **errp)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
@@ -328,11 +436,20 @@  static void virtio_iommu_device_realize(DeviceState *dev, Error **errp)
     } else {
         error_setg(errp, "VIRTIO-IOMMU is not attached to any PCI bus!");
     }
+
+    s->domains = g_tree_new_full((GCompareDataFunc)int_cmp,
+                                 NULL, NULL, virtio_iommu_put_domain);
+    s->endpoints = g_tree_new_full((GCompareDataFunc)int_cmp,
+                                   NULL, NULL, virtio_iommu_put_endpoint);
 }
 
 static void virtio_iommu_device_unrealize(DeviceState *dev, Error **errp)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
+    VirtIOIOMMU *s = VIRTIO_IOMMU(dev);
+
+    g_tree_destroy(s->domains);
+    g_tree_destroy(s->endpoints);
 
     virtio_cleanup(vdev);
 }