Message ID: 1447780843-9223-3-git-send-email-gkulkarni@caviumnetworks.com
State: Superseded, archived
Hi, On Tue, Nov 17, 2015 at 10:50:41PM +0530, Ganapatrao Kulkarni wrote: > DT bindings for numa mapping of memory, cores and IOs. > > Reviewed-by: Robert Richter <rrichter@cavium.com> > Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com> Overall this looks good to me. However, I have a couple of concerns. > --- > Documentation/devicetree/bindings/arm/numa.txt | 272 +++++++++++++++++++++++++ > 1 file changed, 272 insertions(+) > create mode 100644 Documentation/devicetree/bindings/arm/numa.txt > > diff --git a/Documentation/devicetree/bindings/arm/numa.txt b/Documentation/devicetree/bindings/arm/numa.txt > new file mode 100644 > index 0000000..b87bf4f > --- /dev/null > +++ b/Documentation/devicetree/bindings/arm/numa.txt > @@ -0,0 +1,272 @@ > +============================================================================== > +NUMA binding description. > +============================================================================== > + > +============================================================================== > +1 - Introduction > +============================================================================== > + > +Systems employing a Non Uniform Memory Access (NUMA) architecture contain > +collections of hardware resources including processors, memory, and I/O buses, > +that comprise what is commonly known as a NUMA node. > +Processor accesses to memory within the local NUMA node is generally faster > +than processor accesses to memory outside of the local NUMA node. > +DT defines interfaces that allow the platform to convey NUMA node > +topology information to OS. > + > +============================================================================== > +2 - numa-node-id > +============================================================================== > +The device node property numa-node-id describes numa domains within a > +machine. This property can be used in device nodes like cpu, memory, bus and > +devices to map to respective numa nodes. 
> + > +numa-node-id property is a 32-bit integer which defines numa node id to which > +this device node has numa domain association. I'd prefer if the above two paragraphs were replaced with: For the purpose of identification, each NUMA node is associated with a unique token known as a node id. For the purpose of this binding a node id is a 32-bit integer. A device node is associated with a NUMA node by the presence of a numa-node-id property which contains the node id of the device. > + > +Example: > + /* numa node 0 */ > + numa-node-id = <0>; > + > + /* numa node 1 */ > + numa-node-id = <1>; > + > +============================================================================== > +3 - distance-map > +============================================================================== > + > +The device tree node distance-map describes the relative > +distance (memory latency) between all numa nodes. Is this not a combined approximation for latency and bandwidth? > +- compatible : Should at least contain "numa,distance-map-v1". Please use "numa-distance-map-v1", as "numa" is not a vendor. > +- distance-matrix > + This property defines a matrix to describe the relative distances > + between all numa nodes. > + It is represented as a list of node pairs and their relative distance. > + > + Note: > + 1. Each entry represents distance from first node to second node. > + 2. If both directions between 2 nodes have the same distance, only > + one entry is required. I still don't understand what direction means in this context. Are there systems (of any architecture) which don't have symmetric distances? Which accesses does this apply differently to? Given that, I think that it might be best to explicitly call out distances as being equal, and leave any directionality for a later revision of the binding when we have some semantics for directionality. > + 2. distance-matrix shold have entries in lexicographical ascending order of nodes. > + 3. 
There must be only one Device node distance-map and must reside in the root node. > + > +Example: > + 4 nodes connected in mesh/ring topology as below, > + > + 0_______20______1 > + | | > + | | > + 20| |20 > + | | > + | | > + |_______________| > + 3 20 2 > + > + if relative distance for each hop is 20, > + then inter node distance would be for this topology will be, > + 0 -> 1 = 20 > + 1 -> 2 = 20 > + 2 -> 3 = 20 > + 3 -> 0 = 20 > + 0 -> 2 = 40 > + 1 -> 3 = 40 How is this scaled relative to a local access? Do we assume that a local access has value 1, e.g. each hop takes 20x a local access in this example? Do we need a finer-grained scale (e.g. to allow us to represent a distance of 2.5)? The ACPI SLIT spec seems to give local accesses a value 10 implicitly to this end. Other than those points, I'm happy with this binding. Thanks, Mark. -- To unsubscribe from this list: send the line "unsubscribe devicetree" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
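[Editorial note: the hop-based distances questioned above can be sanity-checked mechanically. The sketch below is an illustration only, not part of the patch or the binding: it derives the all-pairs matrix for the 4-node ring from the per-hop cost of 20 using Floyd-Warshall, with the ACPI-SLIT-style value of 10 for a local access that the thread goes on to discuss. The function name is an assumption.]

```python
# Hypothetical sanity check for the 4-node ring example: compute
# all-pairs distances from the direct links with Floyd-Warshall and
# confirm the multi-hop values quoted in the binding (e.g. 0 -> 2 = 40).
INF = float("inf")

def all_pairs_distances(n, edges, local=10):
    """edges: {(a, b): cost} for direct links; returns an n x n matrix.
    Diagonal entries get `local` (10, mirroring the ACPI SLIT convention)."""
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = local                      # local access
    for (a, b), cost in edges.items():
        d[a][b] = min(d[a][b], cost)         # links taken as symmetric here
        d[b][a] = min(d[b][a], cost)
    for k in range(n):                       # Floyd-Warshall relaxation
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# The ring from the example: 0-1, 1-2, 2-3, 3-0, each hop costing 20.
ring = {(0, 1): 20, (1, 2): 20, (2, 3): 20, (3, 0): 20}
dist = all_pairs_distances(4, ring)
```

With these inputs the opposite corners come out at 40, matching the 0 -> 2 and 1 -> 3 entries in the example.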
On Fri, Dec 11, 2015 at 7:23 PM, Mark Rutland <mark.rutland@arm.com> wrote: > Hi, > > On Tue, Nov 17, 2015 at 10:50:41PM +0530, Ganapatrao Kulkarni wrote: >> DT bindings for numa mapping of memory, cores and IOs. >> >> Reviewed-by: Robert Richter <rrichter@cavium.com> >> Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com> > > Overall this looks good to me. However, I have a couple of concerns. thanks. > >> --- >> Documentation/devicetree/bindings/arm/numa.txt | 272 +++++++++++++++++++++++++ >> 1 file changed, 272 insertions(+) >> create mode 100644 Documentation/devicetree/bindings/arm/numa.txt >> >> diff --git a/Documentation/devicetree/bindings/arm/numa.txt b/Documentation/devicetree/bindings/arm/numa.txt >> new file mode 100644 >> index 0000000..b87bf4f >> --- /dev/null >> +++ b/Documentation/devicetree/bindings/arm/numa.txt >> @@ -0,0 +1,272 @@ >> +============================================================================== >> +NUMA binding description. >> +============================================================================== >> + >> +============================================================================== >> +1 - Introduction >> +============================================================================== >> + >> +Systems employing a Non Uniform Memory Access (NUMA) architecture contain >> +collections of hardware resources including processors, memory, and I/O buses, >> +that comprise what is commonly known as a NUMA node. >> +Processor accesses to memory within the local NUMA node is generally faster >> +than processor accesses to memory outside of the local NUMA node. >> +DT defines interfaces that allow the platform to convey NUMA node >> +topology information to OS. 
>> + >> +============================================================================== >> +2 - numa-node-id >> +============================================================================== >> +The device node property numa-node-id describes numa domains within a >> +machine. This property can be used in device nodes like cpu, memory, bus and >> +devices to map to respective numa nodes. >> + >> +numa-node-id property is a 32-bit integer which defines numa node id to which >> +this device node has numa domain association. > > I'd prefer if the above two paragraphs were replaced with: > > For the purpose of identification, each NUMA node is associated > with a unique token known as a node id. For the purpose of this > binding a node id is a 32-bit integer. > > A device node is associated with a NUMA node by the presence of > a numa-node-id property which contains the node id of the > device. ok, will do. > >> + >> +Example: >> + /* numa node 0 */ >> + numa-node-id = <0>; >> + >> + /* numa node 1 */ >> + numa-node-id = <1>; >> + >> +============================================================================== >> +3 - distance-map >> +============================================================================== >> + >> +The device tree node distance-map describes the relative >> +distance (memory latency) between all numa nodes. > > Is this not a combined approximation for latency and bandwidth? AFAIK, it is to represent inter-node memory access latency. > >> +- compatible : Should at least contain "numa,distance-map-v1". > > Please use "numa-distance-map-v1", as "numa" is not a vendor. ok > >> +- distance-matrix >> + This property defines a matrix to describe the relative distances >> + between all numa nodes. >> + It is represented as a list of node pairs and their relative distance. >> + >> + Note: >> + 1. Each entry represents distance from first node to second node. >> + 2. 
If both directions between 2 nodes have the same distance, only >> + one entry is required. > > I still don't understand what direction means in this context. Are there > systems (of any architecture) which don't have symmetric distances? > Which accesses does this apply differently to? > > Given that, I think that it might be best to explicitly call out > distances as being equal, and leave any directionality for a later > revision of the binding when we have some semantics for directionality. Agreed; given that there is no known system to substantiate dual directions, let us not be explicit about direction. > >> + 2. distance-matrix shold have entries in lexicographical ascending order of nodes. >> + 3. There must be only one Device node distance-map and must reside in the root node. >> + >> +Example: >> + 4 nodes connected in mesh/ring topology as below, >> + >> + 0_______20______1 >> + | | >> + | | >> + 20| |20 >> + | | >> + | | >> + |_______________| >> + 3 20 2 >> + >> + if relative distance for each hop is 20, >> + then inter node distance would be for this topology will be, >> + 0 -> 1 = 20 >> + 1 -> 2 = 20 >> + 2 -> 3 = 20 >> + 3 -> 0 = 20 >> + 0 -> 2 = 40 >> + 1 -> 3 = 40 > > How is this scaled relative to a local access? this is based on representing local distance with 10 and all inter-node latency being represented as multiple of 10. > > Do we assume that a local access has value 1, e.g. each hop takes 20x a > local access in this example? The local distance is represented as 10, this is fixed and same as in ACPI. Inter-node distance can be any number greater than 10. this information can be added here to make it clear. > > Do we need a finer-grained scale (e.g. to allow us to represent a > distance of 2.5)? The ACPI SLIT spec seems to give local accesses a > value 10 implicitly to this end. Yes, same as ACPI; the local node is 10. > > Other than those points, I'm happy with this binding. > > Thanks, > Mark.
> thanks Ganapat
Hi, On Fri, Dec 11, 2015 at 08:11:07PM +0530, Ganapatrao Kulkarni wrote: > On Fri, Dec 11, 2015 at 7:23 PM, Mark Rutland <mark.rutland@arm.com> wrote: > > Hi, > > > > On Tue, Nov 17, 2015 at 10:50:41PM +0530, Ganapatrao Kulkarni wrote: > >> DT bindings for numa mapping of memory, cores and IOs. > >> > >> Reviewed-by: Robert Richter <rrichter@cavium.com> > >> Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com> > > > > Overall this looks good to me. However, I have a couple of concerns. > thanks. [...] > >> +============================================================================== > >> +2 - numa-node-id > >> +============================================================================== > >> +The device node property numa-node-id describes numa domains within a > >> +machine. This property can be used in device nodes like cpu, memory, bus and > >> +devices to map to respective numa nodes. > >> + > >> +numa-node-id property is a 32-bit integer which defines numa node id to which > >> +this device node has numa domain association. > > > > I'd prefer if the above two paragraphs were replaced with: > > > > For the purpose of identification, each NUMA node is associated > > with a unique token known as a node id. For the purpose of this > > binding a node id is a 32-bit integer. > > > > A device node is associated with a NUMA node by the presence of > > a numa-node-id property which contains the node id of the > > device. > ok, will do. [...] > >> +============================================================================== > >> +3 - distance-map > >> +============================================================================== > >> + > >> +The device tree node distance-map describes the relative > >> +distance (memory latency) between all numa nodes. > > > > Is this not a combined approximation for latency and bandwidth? > AFAIK, it is to represent inter-node memory access latency. 
> > > >> +- compatible : Should at least contain "numa,distance-map-v1". > > > > Please use "numa-distance-map-v1", as "numa" is not a vendor. > ok > > > >> +- distance-matrix > >> + This property defines a matrix to describe the relative distances > >> + between all numa nodes. > >> + It is represented as a list of node pairs and their relative distance. > >> + > >> + Note: > >> + 1. Each entry represents distance from first node to second node. > >> + 2. If both directions between 2 nodes have the same distance, only > >> + one entry is required. > > > > I still don't understand what direction means in this context. Are there > > systems (of any architecture) which don't have symmetric distances? > > Which accesses does this apply differently to? > > > > Given that, I think that it might be best to explicitly call out > > distances as being equal, and leave any directionality for a later > > revision of the binding when we have some semantics for directionality. > agreed, given that there is no know system to substantiate dual direction, > let us not explicit about direction. Regarding your comment in [1], I was expecting a respin of this series with the above comments addressed. I will not provide an ack until I've seen that. Additional concerns below also apply. > >> + 2. distance-matrix shold have entries in lexicographical ascending order of nodes. > >> + 3. There must be only one Device node distance-map and must reside in the root node. > >> + > >> +Example: > >> + 4 nodes connected in mesh/ring topology as below, > >> + > >> + 0_______20______1 > >> + | | > >> + | | > >> + 20| |20 > >> + | | > >> + | | > >> + |_______________| > >> + 3 20 2 > >> + > >> + if relative distance for each hop is 20, > >> + then inter node distance would be for this topology will be, > >> + 0 -> 1 = 20 > >> + 1 -> 2 = 20 > >> + 2 -> 3 = 20 > >> + 3 -> 0 = 20 > >> + 0 -> 2 = 40 > >> + 1 -> 3 = 40 > > > > How is this scaled relative to a local access? 
> this is based on representing local distance with 10 and > all inter-node latency being represented as multiple of 10. > > > > Do we assume that a local access has value 1, e.g. each hop takes 20x a > > local access in this example? > The local distance is represented as 10, this is fixed and same as in ACPI. > Inter-node distance can be any number greater than 10. > this information can be added here to make it clear. This seems rather arbitrary. Why can we not define the local distance in the DT? I appreciate that the value is hard-coded for ACPI, but we don't have to copy that limitation. I'm not sure if asymmetric local distances matter. Thanks, Mark. [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2015-December/394634.html
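[Editorial note: if the DT were allowed to declare its own local distance, as Mark suggests above, an OS could rescale the map to the fixed SLIT convention (local = 10) at parse time. The sketch below is a hypothetical illustration of that idea; the function name, rounding policy, and input map are assumptions, not anything proposed in the thread. It also shows how a non-10 local distance would permit finer-grained ratios such as 2.5x.]

```python
# Hypothetical rescaling: take a distance map expressed against an
# arbitrary declared local distance and normalize it so that local
# accesses cost 10, as the ACPI SLIT convention fixes them.
def rescale_to_slit(matrix, local):
    """matrix: {(a, b): distance} where `local` is the local distance.
    Returns the same map rescaled so local accesses cost 10."""
    if local <= 0:
        raise ValueError("local distance must be positive")
    # Round to the nearest integer; SLIT distances are integral.
    return {pair: round(d * 10 / local) for pair, d in matrix.items()}

# A map that used 4 as its local distance; the 10 here is a remote
# access costing 2.5x local, which the fixed scale can now represent.
raw = {(0, 0): 4, (1, 1): 4, (0, 1): 8, (1, 0): 10}
slit = rescale_to_slit(raw, local=4)
```

The 2.5x remote access rescales to 25, i.e. a finer granularity than multiples of the hop cost.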
On Fri, Dec 18, 2015 at 12:37 AM, Mark Rutland <mark.rutland@arm.com> wrote: > Hi, > > On Fri, Dec 11, 2015 at 08:11:07PM +0530, Ganapatrao Kulkarni wrote: >> On Fri, Dec 11, 2015 at 7:23 PM, Mark Rutland <mark.rutland@arm.com> wrote: >> > Hi, >> > >> > On Tue, Nov 17, 2015 at 10:50:41PM +0530, Ganapatrao Kulkarni wrote: >> >> DT bindings for numa mapping of memory, cores and IOs. >> >> >> >> Reviewed-by: Robert Richter <rrichter@cavium.com> >> >> Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com> >> > >> > Overall this looks good to me. However, I have a couple of concerns. >> thanks. > > [...] > >> >> +============================================================================== >> >> +2 - numa-node-id >> >> +============================================================================== >> >> +The device node property numa-node-id describes numa domains within a >> >> +machine. This property can be used in device nodes like cpu, memory, bus and >> >> +devices to map to respective numa nodes. >> >> + >> >> +numa-node-id property is a 32-bit integer which defines numa node id to which >> >> +this device node has numa domain association. >> > >> > I'd prefer if the above two paragraphs were replaced with: >> > >> > For the purpose of identification, each NUMA node is associated >> > with a unique token known as a node id. For the purpose of this >> > binding a node id is a 32-bit integer. >> > >> > A device node is associated with a NUMA node by the presence of >> > a numa-node-id property which contains the node id of the >> > device. >> ok, will do. > > [...] > >> >> +============================================================================== >> >> +3 - distance-map >> >> +============================================================================== >> >> + >> >> +The device tree node distance-map describes the relative >> >> +distance (memory latency) between all numa nodes. 
>> > >> > Is this not a combined approximation for latency and bandwidth? >> AFAIK, it is to represent inter-node memory access latency. >> > >> >> +- compatible : Should at least contain "numa,distance-map-v1". >> > >> > Please use "numa-distance-map-v1", as "numa" is not a vendor. >> ok >> > >> >> +- distance-matrix >> >> + This property defines a matrix to describe the relative distances >> >> + between all numa nodes. >> >> + It is represented as a list of node pairs and their relative distance. >> >> + >> >> + Note: >> >> + 1. Each entry represents distance from first node to second node. >> >> + 2. If both directions between 2 nodes have the same distance, only >> >> + one entry is required. >> > >> > I still don't understand what direction means in this context. Are there >> > systems (of any architecture) which don't have symmetric distances? >> > Which accesses does this apply differently to? >> > >> > Given that, I think that it might be best to explicitly call out >> > distances as being equal, and leave any directionality for a later >> > revision of the binding when we have some semantics for directionality. >> agreed, given that there is no know system to substantiate dual direction, >> let us not explicit about direction. > > Regarding your comment in [1], I was expecting a respin of this series > with the above comments addressed. I will not provide an ack until I've > seen that. sure, i will respin with the comments addressed. > > Additional concerns below also apply. > >> >> + 2. distance-matrix shold have entries in lexicographical ascending order of nodes. >> >> + 3. There must be only one Device node distance-map and must reside in the root node. 
>> >> + >> >> +Example: >> >> + 4 nodes connected in mesh/ring topology as below, >> >> + >> >> + 0_______20______1 >> >> + | | >> >> + | | >> >> + 20| |20 >> >> + | | >> >> + | | >> >> + |_______________| >> >> + 3 20 2 >> >> + >> >> + if relative distance for each hop is 20, >> >> + then inter node distance would be for this topology will be, >> >> + 0 -> 1 = 20 >> >> + 1 -> 2 = 20 >> >> + 2 -> 3 = 20 >> >> + 3 -> 0 = 20 >> >> + 0 -> 2 = 40 >> >> + 1 -> 3 = 40 >> > >> > How is this scaled relative to a local access? >> this is based on representing local distance with 10 and >> all inter-node latency being represented as multiple of 10. >> >> > >> > Do we assume that a local access has value 1, e.g. each hop takes 20x a >> > local access in this example? >> The local distance is represented as 10, this is fixed and same as in ACPI. >> Inter-node distance can be any number greater than 10. >> this information can be added here to make it clear. > > This seems rather arbitrary. > > Why can we not define the local distance in the DT? I appreciate that > the value is hard-coded for ACPI, but we don't have to copy that > limitation. yes, we can mention local distance. > > I'm not sure if asymmetric local distances matter. > > Thanks, > Mark. > > [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2015-December/394634.html thanks Ganapat
diff --git a/Documentation/devicetree/bindings/arm/numa.txt b/Documentation/devicetree/bindings/arm/numa.txt
new file mode 100644
index 0000000..b87bf4f
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/numa.txt
@@ -0,0 +1,272 @@
+==============================================================================
+NUMA binding description.
+==============================================================================
+
+==============================================================================
+1 - Introduction
+==============================================================================
+
+Systems employing a Non Uniform Memory Access (NUMA) architecture contain
+collections of hardware resources including processors, memory, and I/O buses,
+that comprise what is commonly known as a NUMA node.
+Processor accesses to memory within the local NUMA node are generally faster
+than processor accesses to memory outside of the local NUMA node.
+DT defines interfaces that allow the platform to convey NUMA node
+topology information to the OS.
+
+==============================================================================
+2 - numa-node-id
+==============================================================================
+The device node property numa-node-id describes numa domains within a
+machine. This property can be used in device nodes such as cpu, memory, bus
+and device nodes to map them to their respective numa nodes.
+
+The numa-node-id property is a 32-bit integer which defines the numa node id
+with which this device node is associated.
+
+Example:
+        /* numa node 0 */
+        numa-node-id = <0>;
+
+        /* numa node 1 */
+        numa-node-id = <1>;
+
+==============================================================================
+3 - distance-map
+==============================================================================
+
+The device tree node distance-map describes the relative
+distance (memory latency) between all numa nodes.
+
+- compatible : Should at least contain "numa,distance-map-v1".
+
+- distance-matrix
+  This property defines a matrix to describe the relative distances
+  between all numa nodes.
+  It is represented as a list of node pairs and their relative distance.
+
+  Note:
+        1. Each entry represents the distance from the first node to the
+           second node.
+        2. If both directions between 2 nodes have the same distance, only
+           one entry is required.
+        3. distance-matrix should have entries in lexicographic ascending
+           order of nodes.
+        4. There must be only one distance-map device node, and it must
+           reside in the root node.
+
+Example:
+        4 nodes connected in a mesh/ring topology as below,
+
+          0_______20______1
+          |               |
+          |               |
+        20|               |20
+          |               |
+          |               |
+          |_______________|
+          3       20      2
+
+        If the relative distance for each hop is 20,
+        then the inter-node distances for this topology will be:
+        0 -> 1 = 20
+        1 -> 2 = 20
+        2 -> 3 = 20
+        3 -> 0 = 20
+        0 -> 2 = 40
+        1 -> 3 = 40
+
+        and the dt representation of this distance matrix is:
+
+        distance-map {
+                compatible = "numa,distance-map-v1";
+                distance-matrix = <0 0 10>,
+                                  <0 1 20>,
+                                  <0 2 40>,
+                                  <0 3 20>,
+                                  <1 0 20>,
+                                  <1 1 10>,
+                                  <1 2 20>,
+                                  <1 3 40>,
+                                  <2 0 40>,
+                                  <2 1 20>,
+                                  <2 2 10>,
+                                  <2 3 20>,
+                                  <3 0 20>,
+                                  <3 1 40>,
+                                  <3 2 20>,
+                                  <3 3 10>;
+        };
+
+Note:
+        1. Entries like <1 0 20> are optional if <0 1 20> and <1 0 20>
+           describe the same distance.
+
+==============================================================================
+4 - Example dts
+==============================================================================
+
+A 2-socket system consisting of 2 boards connected through a ccn bus, with
+each board having one socket/soc of 8 cpus, memory and a pci bus.
+
+        memory@00c00000 {
+                device_type = "memory";
+                reg = <0x0 0x00c00000 0x0 0x80000000>;
+                /* node 0 */
+                numa-node-id = <0>;
+        };
+
+        memory@10000000000 {
+                device_type = "memory";
+                reg = <0x100 0x00000000 0x0 0x80000000>;
+                /* node 1 */
+                numa-node-id = <1>;
+        };
+
+        cpus {
+                #address-cells = <2>;
+                #size-cells = <0>;
+
+                cpu@000 {
+                        device_type = "cpu";
+                        compatible = "arm,armv8";
+                        reg = <0x0 0x000>;
+                        enable-method = "psci";
+                        /* node 0 */
+                        numa-node-id = <0>;
+                };
+                cpu@001 {
+                        device_type = "cpu";
+                        compatible = "arm,armv8";
+                        reg = <0x0 0x001>;
+                        enable-method = "psci";
+                        numa-node-id = <0>;
+                };
+                cpu@002 {
+                        device_type = "cpu";
+                        compatible = "arm,armv8";
+                        reg = <0x0 0x002>;
+                        enable-method = "psci";
+                        numa-node-id = <0>;
+                };
+                cpu@003 {
+                        device_type = "cpu";
+                        compatible = "arm,armv8";
+                        reg = <0x0 0x003>;
+                        enable-method = "psci";
+                        numa-node-id = <0>;
+                };
+                cpu@004 {
+                        device_type = "cpu";
+                        compatible = "arm,armv8";
+                        reg = <0x0 0x004>;
+                        enable-method = "psci";
+                        numa-node-id = <0>;
+                };
+                cpu@005 {
+                        device_type = "cpu";
+                        compatible = "arm,armv8";
+                        reg = <0x0 0x005>;
+                        enable-method = "psci";
+                        numa-node-id = <0>;
+                };
+                cpu@006 {
+                        device_type = "cpu";
+                        compatible = "arm,armv8";
+                        reg = <0x0 0x006>;
+                        enable-method = "psci";
+                        numa-node-id = <0>;
+                };
+                cpu@007 {
+                        device_type = "cpu";
+                        compatible = "arm,armv8";
+                        reg = <0x0 0x007>;
+                        enable-method = "psci";
+                        numa-node-id = <0>;
+                };
+                cpu@008 {
+                        device_type = "cpu";
+                        compatible = "arm,armv8";
+                        reg = <0x0 0x008>;
+                        enable-method = "psci";
+                        /* node 1 */
+                        numa-node-id = <1>;
+                };
+                cpu@009 {
+                        device_type = "cpu";
+                        compatible = "arm,armv8";
+                        reg = <0x0 0x009>;
+                        enable-method = "psci";
+                        numa-node-id = <1>;
+                };
+                cpu@00a {
+                        device_type = "cpu";
+                        compatible = "arm,armv8";
+                        reg = <0x0 0x00a>;
+                        enable-method = "psci";
+                        numa-node-id = <1>;
+                };
+                cpu@00b {
+                        device_type = "cpu";
+                        compatible = "arm,armv8";
+                        reg = <0x0 0x00b>;
+                        enable-method = "psci";
+                        numa-node-id = <1>;
+                };
+                cpu@00c {
+                        device_type = "cpu";
+                        compatible = "arm,armv8";
+                        reg = <0x0 0x00c>;
+                        enable-method = "psci";
+                        numa-node-id = <1>;
+                };
+                cpu@00d {
+                        device_type = "cpu";
+                        compatible = "arm,armv8";
+                        reg = <0x0 0x00d>;
+                        enable-method = "psci";
+                        numa-node-id = <1>;
+                };
+                cpu@00e {
+                        device_type = "cpu";
+                        compatible = "arm,armv8";
+                        reg = <0x0 0x00e>;
+                        enable-method = "psci";
+                        numa-node-id = <1>;
+                };
+                cpu@00f {
+                        device_type = "cpu";
+                        compatible = "arm,armv8";
+                        reg = <0x0 0x00f>;
+                        enable-method = "psci";
+                        numa-node-id = <1>;
+                };
+        };
+
+        pcie0: pcie0@0x8480,00000000 {
+                compatible = "arm,armv8";
+                device_type = "pci";
+                bus-range = <0 255>;
+                #size-cells = <2>;
+                #address-cells = <3>;
+                reg = <0x8480 0x00000000 0 0x10000000>;  /* Configuration space */
+                ranges = <0x03000000 0x8010 0x00000000 0x8010 0x00000000 0x70 0x00000000>;
+                /* node 0 */
+                numa-node-id = <0>;
+        };
+
+        pcie1: pcie1@0x9480,00000000 {
+                compatible = "arm,armv8";
+                device_type = "pci";
+                bus-range = <0 255>;
+                #size-cells = <2>;
+                #address-cells = <3>;
+                reg = <0x9480 0x00000000 0 0x10000000>;  /* Configuration space */
+                ranges = <0x03000000 0x9010 0x00000000 0x9010 0x00000000 0x70 0x00000000>;
+                /* node 1 */
+                numa-node-id = <1>;
+        };
+
+        distance-map {
+                compatible = "numa,distance-map-v1";
+                distance-matrix = <0 0 10>,
+                                  <0 1 20>,
+                                  <1 1 10>;
+        };
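[Editorial note: the rules the binding states for distance-matrix (lexicographic order, optional symmetric reverse entries, a local distance of 10) are mechanical enough to check. The sketch below is a hypothetical validator over the flat <from to distance> cells, such as the trimmed map at the end of the example dts; the function name and error policy are assumptions, not part of the binding.]

```python
# Hypothetical consistency check for a distance-matrix property, given
# as the flat list of <from to distance> cells. Enforces lexicographic
# ordering and a local distance of 10, and fills in symmetric defaults
# when a reverse pair is omitted.
def parse_distance_matrix(cells):
    if len(cells) % 3:
        raise ValueError("distance-matrix length must be a multiple of 3")
    triplets = [tuple(cells[i:i + 3]) for i in range(0, len(cells), 3)]
    pairs = [(a, b) for a, b, _ in triplets]
    if pairs != sorted(pairs):
        raise ValueError("entries must be in lexicographic ascending order")
    dist = {}
    for a, b, d in triplets:
        if a == b and d != 10:
            raise ValueError("local distance must be 10")
        dist[(a, b)] = d
    # Omitted reverse directions default to the forward distance.
    for (a, b), d in list(dist.items()):
        dist.setdefault((b, a), d)
    return dist

# The trimmed map from the example dts above (reverse of <0 1 20> omitted):
cells = [0, 0, 10,
         0, 1, 20,
         1, 1, 10]
dist = parse_distance_matrix(cells)
```

Parsing the trimmed map yields the full 2-node matrix, with the omitted <1 0> entry defaulting to 20.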