
[RFC] interconnect: Replace of_icc_get() with icc_get() and reduce DT binding

Message ID 20190925054133.206992-1-swboyd@chromium.org
State RFC, archived
Series [RFC] interconnect: Replace of_icc_get() with icc_get() and reduce DT binding

Checks

Context Check Description
robh/checkpatch warning "total: 0 errors, 1 warnings, 59 lines checked"

Commit Message

Stephen Boyd Sept. 25, 2019, 5:41 a.m. UTC
I don't see any users of icc_get() in the kernel today, and adding them
doesn't make sense. That's because adding calls to that function in a
driver will make the driver SoC specific given that the arguments are
some sort of source and destination numbers that would typically be
listed in DT or come from platform data so they can match a global
numberspace of interconnect numbers. It would be better to follow the
approach of other kernel frameworks where the API is the same no matter
how the platform is described (i.e. platform data, DT, ACPI, etc.) and
swizzle the result in the framework to match whatever the device is by
checking for a DT node pointer or a fwnode pointer, etc. Therefore,
install icc_get() as the de facto API and make drivers use that instead
of of_icc_get(), which implies the driver is DT specific when it doesn't
need to be.
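
For illustration (not part of this patch), a consumer of the proposed
API would look roughly like this; the driver, the "sdhc-mem" path name
and the bandwidth numbers are all made up:

	#include <linux/err.h>
	#include <linux/interconnect.h>
	#include <linux/platform_device.h>

	static int foo_probe(struct platform_device *pdev)
	{
		struct icc_path *path;
		int ret;

		/* Same call whether the device came from DT, ACPI or platform data */
		path = icc_get(&pdev->dev, "sdhc-mem");
		if (IS_ERR(path))
			return PTR_ERR(path);

		/* Ask for 100000 kB/s average, 200000 kB/s peak (made-up numbers) */
		ret = icc_set_bw(path, 100000, 200000);
		if (ret) {
			icc_put(path);
			return ret;
		}

		/* a real driver would stash 'path' in its driver data */
		return 0;
	}

Nothing at the call site ties the driver to DT; the framework resolves
"sdhc-mem" against whatever firmware description (or platform data) is
available.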

The DT binding could also be simplified somewhat. Currently a path needs
to be specified in DT for each and every use case that is possible for a
device to want. Typically the path is to memory, which looks to be
reserved in the binding with the "dma-mem" named path, but sometimes
the path is from a device to the CPU or more generically from a device
to another device which could be a CPU, cache, DMA master, or another
device if some sort of DMA to DMA scenario is happening. Let's remove
the pair part of the binding so that we just list out a device's
possible endpoints on the bus or busses that it's connected to.

If the kernel wants to figure out what the path is to memory or the CPU
or a cache or something else it should be able to do that by finding the
node for the "destination" endpoint, extracting that node's
"interconnects" property, and deriving the path in software. For
example, we shouldn't need to write out each use case path by path in DT
for each endpoint node that wants to set a bandwidth to memory. We
should just be able to indicate what endpoint(s) a device sits on based
on the interconnect provider in the system and then walk the various
interconnects to find the path from that source endpoint to the
destination endpoint.
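
To make "deriving the path in software" concrete, a plain breadth-first
search over the provider graph should be enough. A rough sketch only
(invented struct, fixed-size queue to keep it short), not actual
framework code:

	#include <linux/types.h>

	/* Simplified endpoint node: just its outgoing links */
	struct ep {
		struct ep *links[8];
		int num_links;
		struct ep *prev;	/* breadcrumb for rebuilding the path */
		bool visited;
	};

	/* Returns true and leaves ->prev breadcrumbs if dst is reachable from src */
	static bool find_path(struct ep *src, struct ep *dst)
	{
		struct ep *queue[64];
		int head = 0, tail = 0, i;

		src->visited = true;
		queue[tail++] = src;

		while (head < tail) {
			struct ep *cur = queue[head++];

			if (cur == dst)
				return true;

			for (i = 0; i < cur->num_links; i++) {
				struct ep *next = cur->links[i];

				if (!next->visited && tail < 64) {
					next->visited = true;
					next->prev = cur;
					queue[tail++] = next;
				}
			}
		}

		return false;
	}

The real framework would of course work on its existing node structures
and allocate properly; the point is only that the path can be computed
from the graph instead of being spelled out in DT.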

Obviously this patch doesn't compile but I'm sending it out to start
this discussion so we don't get stuck on the binding or the kernel APIs
for a long time. It looks like we should be OK in terms of backwards
compatibility because we can just ignore the second element in an old
binding, but maybe we'll want to describe paths in different directions
(e.g. the path from the CPU to the SD controller may be different than
the path the SD controller takes to the CPU) and that may require
extending interconnect-names to indicate what direction/sort of path it
is. I'm basically thinking about master vs. slave ports in AXI land.

Cc: Maxime Ripard <mripard@kernel.org>
Cc: <linux-pm@vger.kernel.org>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: <devicetree@vger.kernel.org>
Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
Cc: Evan Green <evgreen@chromium.org>
Cc: David Dai <daidavid1@codeaurora.org>
Signed-off-by: Stephen Boyd <swboyd@chromium.org>
---
 .../bindings/interconnect/interconnect.txt    | 19 ++++---------------
 include/linux/interconnect.h                  | 13 ++-----------
 2 files changed, 6 insertions(+), 26 deletions(-)


base-commit: b5b3bd898ba99fb0fb6aed3b23ec6353a1724d6f

Comments

Bjorn Andersson Sept. 25, 2019, 5:59 a.m. UTC | #1
On Tue 24 Sep 22:41 PDT 2019, Stephen Boyd wrote:

> I don't see any users of icc_get() in the kernel today, and adding them
> doesn't make sense. That's because adding calls to that function in a
> driver will make the driver SoC specific given that the arguments are
> some sort of source and destination numbers that would typically be
> listed in DT or come from platform data so they can match a global
> numberspace of interconnect numbers. It would be better to follow the
> approach of other kernel frameworks where the API is the same no matter
> how the platform is described (i.e. platform data, DT, ACPI, etc.) and
> swizzle the result in the framework to match whatever the device is by
> checking for a DT node pointer or a fwnode pointer, etc. Therefore,
> install icc_get() as the defacto API and make drivers use that instead
> of of_icc_get() which implies the driver is DT specific when it doesn't
> need to be.
> 

+1 on this part!

> The DT binding could also be simplified somewhat. Currently a path needs
> to be specified in DT for each and every use case that is possible for a
> device to want. Typically the path is to memory, which looks to be
> reserved for in the binding with the "dma-mem" named path, but sometimes
> the path is from a device to the CPU or more generically from a device
> to another device which could be a CPU, cache, DMA master, or another
> device if some sort of DMA to DMA scenario is happening. Let's remove
> the pair part of the binding so that we just list out a device's
> possible endpoints on the bus or busses that it's connected to.
> 
> If the kernel wants to figure out what the path is to memory or the CPU
> or a cache or something else it should be able to do that by finding the
> node for the "destination" endpoint, extracting that node's
> "interconnects" property, and deriving the path in software. For
> example, we shouldn't need to write out each use case path by path in DT
> for each endpoint node that wants to set a bandwidth to memory. We
> should just be able to indicate what endpoint(s) a device sits on based
> on the interconnect provider in the system and then walk the various
> interconnects to find the path from that source endpoint to the
> destination endpoint.
> 

But doesn't this imply that the other end of the path is always some
specific node, e.g. DDR? With a single node, how would you describe
CPU->LLCC or GPU->OCIMEM?

> Obviously this patch doesn't compile but I'm sending it out to start
> this discussion so we don't get stuck on the binding or the kernel APIs
> for a long time. It looks like we should be OK in terms of backwards
> compatibility because we can just ignore the second element in an old
> binding, but maybe we'll want to describe paths in different directions
> (e.g. the path from the CPU to the SD controller may be different than
> the path the SD controller takes to the CPU) and that may require
> extending interconnect-names to indicate what direction/sort of path it
> is. I'm basically thinking about master vs. slave ports in AXI land.
> 
> Cc: Maxime Ripard <mripard@kernel.org>
> Cc: <linux-pm@vger.kernel.org>
> Cc: Rob Herring <robh+dt@kernel.org>
> Cc: <devicetree@vger.kernel.org>
> Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
> Cc: Evan Green <evgreen@chromium.org>
> Cc: David Dai <daidavid1@codeaurora.org>
> Signed-off-by: Stephen Boyd <swboyd@chromium.org>
> ---
>  .../bindings/interconnect/interconnect.txt    | 19 ++++---------------
>  include/linux/interconnect.h                  | 13 ++-----------
>  2 files changed, 6 insertions(+), 26 deletions(-)
> 
> diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> index 6f5d23a605b7..f8979186b8a7 100644
> --- a/Documentation/devicetree/bindings/interconnect/interconnect.txt
> +++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> @@ -11,7 +11,7 @@ The interconnect provider binding is intended to represent the interconnect
>  controllers in the system. Each provider registers a set of interconnect
>  nodes, which expose the interconnect related capabilities of the interconnect
>  to consumer drivers. These capabilities can be throughput, latency, priority
> -etc. The consumer drivers set constraints on interconnect path (or endpoints)
> +etc. The consumer drivers set constraints on interconnect paths (or endpoints)
>  depending on the use case. Interconnect providers can also be interconnect
>  consumers, such as in the case where two network-on-chip fabrics interface
>  directly.
> @@ -42,23 +42,12 @@ multiple paths from different providers depending on use case and the
>  components it has to interact with.
>  
>  Required properties:
> -interconnects : Pairs of phandles and interconnect provider specifier to denote
> -	        the edge source and destination ports of the interconnect path.
> -
> -Optional properties:
> -interconnect-names : List of interconnect path name strings sorted in the same
> -		     order as the interconnects property. Consumers drivers will use
> -		     interconnect-names to match interconnect paths with interconnect
> -		     specifier pairs.
> -
> -                     Reserved interconnect names:
> -			 * dma-mem: Path from the device to the main memory of
> -			            the system
> +interconnects : phandle and interconnect provider specifier to denote
> +	        the edge source for this node.
>  
>  Example:
>  
>  	sdhci@7864000 {
>  		...
> -		interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
> -		interconnect-names = "sdhc-mem";
> +		interconnects = <&pnoc MASTER_SDCC_1>;

This example seems incomplete, as it doesn't describe the path between
the CPU and the config space; with that path added, I think you would
still need interconnect-names.


But with a single interconnect, the interconnect-names should be
omitted, as done in other frameworks.

>  	};
> diff --git a/include/linux/interconnect.h b/include/linux/interconnect.h
> index d70a914cba11..e1ae704f5ab1 100644
> --- a/include/linux/interconnect.h
> +++ b/include/linux/interconnect.h
> @@ -25,23 +25,14 @@ struct device;
>  
>  #if IS_ENABLED(CONFIG_INTERCONNECT)
>  
> -struct icc_path *icc_get(struct device *dev, const int src_id,
> -			 const int dst_id);
> -struct icc_path *of_icc_get(struct device *dev, const char *name);
> +struct icc_path *icc_get(struct device *dev, const char *name);
>  void icc_put(struct icc_path *path);
>  int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw);
>  void icc_set_tag(struct icc_path *path, u32 tag);
>  
>  #else
>  
> -static inline struct icc_path *icc_get(struct device *dev, const int src_id,
> -				       const int dst_id)
> -{
> -	return NULL;
> -}
> -
> -static inline struct icc_path *of_icc_get(struct device *dev,
> -					  const char *name)
> +static inline struct icc_path *icc_get(struct device *dev, const char *name)

I like this part, it mimics what's done in other frameworks and removes
the ties to OF from the API.

Regards,
Bjorn

>  {
>  	return NULL;
>  }
> 
> base-commit: b5b3bd898ba99fb0fb6aed3b23ec6353a1724d6f
> -- 
> Sent by a computer through tubes
>
Maxime Ripard Sept. 25, 2019, 9:15 a.m. UTC | #2
Hi Stephen,

On Tue, Sep 24, 2019 at 10:41:33PM -0700, Stephen Boyd wrote:
> The DT binding could also be simplified somewhat. Currently a path needs
> to be specified in DT for each and every use case that is possible for a
> device to want. Typically the path is to memory, which looks to be
> reserved for in the binding with the "dma-mem" named path, but sometimes
> the path is from a device to the CPU or more generically from a device
> to another device which could be a CPU, cache, DMA master, or another
> device if some sort of DMA to DMA scenario is happening. Let's remove
> the pair part of the binding so that we just list out a device's
> possible endpoints on the bus or busses that it's connected to.
>
> If the kernel wants to figure out what the path is to memory or the CPU
> or a cache or something else it should be able to do that by finding the
> node for the "destination" endpoint, extracting that node's
> "interconnects" property, and deriving the path in software. For
> example, we shouldn't need to write out each use case path by path in DT
> for each endpoint node that wants to set a bandwidth to memory. We
> should just be able to indicate what endpoint(s) a device sits on based
> on the interconnect provider in the system and then walk the various
> interconnects to find the path from that source endpoint to the
> destination endpoint.

The dma-mem name is used by the OF core to adjust the mapping of the
devices as well. So, any solution needs to be generic (or provide a
generic helper).

Maxime
Stephen Boyd Sept. 25, 2019, 1:28 p.m. UTC | #3
Quoting Bjorn Andersson (2019-09-24 22:59:33)
> On Tue 24 Sep 22:41 PDT 2019, Stephen Boyd wrote:
> 
> > The DT binding could also be simplified somewhat. Currently a path needs
> > to be specified in DT for each and every use case that is possible for a
> > device to want. Typically the path is to memory, which looks to be
> > reserved for in the binding with the "dma-mem" named path, but sometimes
> > the path is from a device to the CPU or more generically from a device
> > to another device which could be a CPU, cache, DMA master, or another
> > device if some sort of DMA to DMA scenario is happening. Let's remove
> > the pair part of the binding so that we just list out a device's
> > possible endpoints on the bus or busses that it's connected to.
> > 
> > If the kernel wants to figure out what the path is to memory or the CPU
> > or a cache or something else it should be able to do that by finding the
> > node for the "destination" endpoint, extracting that node's
> > "interconnects" property, and deriving the path in software. For
> > example, we shouldn't need to write out each use case path by path in DT
> > for each endpoint node that wants to set a bandwidth to memory. We
> > should just be able to indicate what endpoint(s) a device sits on based
> > on the interconnect provider in the system and then walk the various
> > interconnects to find the path from that source endpoint to the
> > destination endpoint.
> > 
> 
> But doesn't this implies that the other end of the path is always some
> specific node, e.g. DDR? With a single node how would you describe
> CPU->LLCC or GPU->OCIMEM?

Specifying only the endpoint the device uses describes what the
hardware block interfaces with. It doesn't imply that there's only one
other end of the path. It implies that the paths should be discoverable
by walking the interconnect graph given some source device node and
target device node. In most cases the target device node will be a DDR
controller node, but sometimes it could be LLCC or OCIMEM. We may need
to add some sort of "get the DDR controller device" API or work it into
the interconnect API somehow to indicate what target endpoint is
desired. By not listing all those paths in DT we gain flexibility to add
more paths later on without having to update or tweak DT to describe
more paths/routes through the interconnect.
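
As a strawman, none of this exists today, such an API could be as small
as:

	#include <linux/device.h>
	#include <linux/interconnect.h>
	#include <linux/of.h>

	/* Hypothetical: resolve the node for the system's main memory target */
	struct device_node *icc_get_mem_target(void);

	/*
	 * Hypothetical: build a path from one of @dev's endpoints to an
	 * arbitrary target node by walking the interconnect graph.
	 */
	struct icc_path *icc_get_to_node(struct device *dev,
					 struct device_node *target);

	static struct icc_path *foo_get_ddr_path(struct device *dev)
	{
		return icc_get_to_node(dev, icc_get_mem_target());
	}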

> 
> > Obviously this patch doesn't compile but I'm sending it out to start
> > this discussion so we don't get stuck on the binding or the kernel APIs
> > for a long time. It looks like we should be OK in terms of backwards
> > compatibility because we can just ignore the second element in an old
> > binding, but maybe we'll want to describe paths in different directions
> > (e.g. the path from the CPU to the SD controller may be different than
> > the path the SD controller takes to the CPU) and that may require
> > extending interconnect-names to indicate what direction/sort of path it
> > is. I'm basically thinking about master vs. slave ports in AXI land.
> > 
> > Cc: Maxime Ripard <mripard@kernel.org>
> > Cc: <linux-pm@vger.kernel.org>
> > Cc: Rob Herring <robh+dt@kernel.org>
> > Cc: <devicetree@vger.kernel.org>
> > Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
> > Cc: Evan Green <evgreen@chromium.org>
> > Cc: David Dai <daidavid1@codeaurora.org>
> > Signed-off-by: Stephen Boyd <swboyd@chromium.org>
> > ---
> >  .../bindings/interconnect/interconnect.txt    | 19 ++++---------------
> >  include/linux/interconnect.h                  | 13 ++-----------
> >  2 files changed, 6 insertions(+), 26 deletions(-)
> > 
> > diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> > index 6f5d23a605b7..f8979186b8a7 100644
> > --- a/Documentation/devicetree/bindings/interconnect/interconnect.txt
> > +++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> > @@ -11,7 +11,7 @@ The interconnect provider binding is intended to represent the interconnect
> >  controllers in the system. Each provider registers a set of interconnect
> >  nodes, which expose the interconnect related capabilities of the interconnect
> >  to consumer drivers. These capabilities can be throughput, latency, priority
> > -etc. The consumer drivers set constraints on interconnect path (or endpoints)
> > +etc. The consumer drivers set constraints on interconnect paths (or endpoints)
> >  depending on the use case. Interconnect providers can also be interconnect
> >  consumers, such as in the case where two network-on-chip fabrics interface
> >  directly.
> > @@ -42,23 +42,12 @@ multiple paths from different providers depending on use case and the
> >  components it has to interact with.
> >  
> >  Required properties:
> > -interconnects : Pairs of phandles and interconnect provider specifier to denote
> > -             the edge source and destination ports of the interconnect path.
> > -
> > -Optional properties:
> > -interconnect-names : List of interconnect path name strings sorted in the same
> > -                  order as the interconnects property. Consumers drivers will use
> > -                  interconnect-names to match interconnect paths with interconnect
> > -                  specifier pairs.
> > -
> > -                     Reserved interconnect names:
> > -                      * dma-mem: Path from the device to the main memory of
> > -                                 the system
> > +interconnects : phandle and interconnect provider specifier to denote
> > +             the edge source for this node.
> >  
> >  Example:
> >  
> >       sdhci@7864000 {
> >               ...
> > -             interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
> > -             interconnect-names = "sdhc-mem";
> > +             interconnects = <&pnoc MASTER_SDCC_1>;
> 
> This example seems incomplete, as it doesn't describe the path between
> CPU and the config space, with this in place I think you need the
> interconnect-names.
> 
> 
> But with a single interconnect, the interconnect-names should be
> omitted, as done in other frameworks.
> 

Sure, no names makes sense when it's just one path.
David Dai Sept. 27, 2019, 5:16 p.m. UTC | #4
On 9/25/2019 6:28 AM, Stephen Boyd wrote:
> Quoting Bjorn Andersson (2019-09-24 22:59:33)
>> On Tue 24 Sep 22:41 PDT 2019, Stephen Boyd wrote:
>>
>>> The DT binding could also be simplified somewhat. Currently a path needs
>>> to be specified in DT for each and every use case that is possible for a
>>> device to want. Typically the path is to memory, which looks to be
>>> reserved for in the binding with the "dma-mem" named path, but sometimes
>>> the path is from a device to the CPU or more generically from a device
>>> to another device which could be a CPU, cache, DMA master, or another
>>> device if some sort of DMA to DMA scenario is happening. Let's remove
>>> the pair part of the binding so that we just list out a device's
>>> possible endpoints on the bus or busses that it's connected to.
>>>
>>> If the kernel wants to figure out what the path is to memory or the CPU
>>> or a cache or something else it should be able to do that by finding the
>>> node for the "destination" endpoint, extracting that node's
>>> "interconnects" property, and deriving the path in software. For
>>> example, we shouldn't need to write out each use case path by path in DT
>>> for each endpoint node that wants to set a bandwidth to memory. We
>>> should just be able to indicate what endpoint(s) a device sits on based
>>> on the interconnect provider in the system and then walk the various
>>> interconnects to find the path from that source endpoint to the
>>> destination endpoint.
>>>
>> But doesn't this implies that the other end of the path is always some
>> specific node, e.g. DDR? With a single node how would you describe
>> CPU->LLCC or GPU->OCIMEM?
> By only specifying the endpoint the device uses it describes what the
> hardware block interfaces with. It doesn't imply that there's only one
> other end of the path. It implies that the paths should be discoverable
> by walking the interconnect graph given some source device node and
> target device node. In most cases the target device node will be a DDR
> controller node, but sometimes it could be LLCC or OCIMEM. We may need
> to add some sort of "get the DDR controller device" API or work it into
> the interconnect API somehow to indicate what target endpoint is
> desired. By not listing all those paths in DT we gain flexibility to add
> more paths later on without having to update or tweak DT to describe
> more paths/routes through the interconnect.


I'm unsure that using the target device node or target source device is
the correct way to represent the constraints that the consumers apply on
the interconnects. While it's true the traffic is intended for the
targeted devices, the constraints (QoS or BW) are for the interconnect,
or specifically the paths that span across the ports of various
interconnects (NoC devices in this case). I think having both src and
dst properties is still the simplest way to achieve the flexibility that
we require to set the constraints for ports (that may not have a target
device defined in DT, or exist as some intermediate port across multiple
interconnects).

>>> Obviously this patch doesn't compile but I'm sending it out to start
>>> this discussion so we don't get stuck on the binding or the kernel APIs
>>> for a long time. It looks like we should be OK in terms of backwards
>>> compatibility because we can just ignore the second element in an old
>>> binding, but maybe we'll want to describe paths in different directions
>>> (e.g. the path from the CPU to the SD controller may be different than
>>> the path the SD controller takes to the CPU) and that may require
>>> extending interconnect-names to indicate what direction/sort of path it
>>> is. I'm basically thinking about master vs. slave ports in AXI land.
>>>
>>> Cc: Maxime Ripard <mripard@kernel.org>
>>> Cc: <linux-pm@vger.kernel.org>
>>> Cc: Rob Herring <robh+dt@kernel.org>
>>> Cc: <devicetree@vger.kernel.org>
>>> Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
>>> Cc: Evan Green <evgreen@chromium.org>
>>> Cc: David Dai <daidavid1@codeaurora.org>
>>> Signed-off-by: Stephen Boyd <swboyd@chromium.org>
>>> ---
>>>   .../bindings/interconnect/interconnect.txt    | 19 ++++---------------
>>>   include/linux/interconnect.h                  | 13 ++-----------
>>>   2 files changed, 6 insertions(+), 26 deletions(-)
>>>
>>> diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt b/Documentation/devicetree/bindings/interconnect/interconnect.txt
>>> index 6f5d23a605b7..f8979186b8a7 100644
>>> --- a/Documentation/devicetree/bindings/interconnect/interconnect.txt
>>> +++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
>>> @@ -11,7 +11,7 @@ The interconnect provider binding is intended to represent the interconnect
>>>   controllers in the system. Each provider registers a set of interconnect
>>>   nodes, which expose the interconnect related capabilities of the interconnect
>>>   to consumer drivers. These capabilities can be throughput, latency, priority
>>> -etc. The consumer drivers set constraints on interconnect path (or endpoints)
>>> +etc. The consumer drivers set constraints on interconnect paths (or endpoints)
>>>   depending on the use case. Interconnect providers can also be interconnect
>>>   consumers, such as in the case where two network-on-chip fabrics interface
>>>   directly.
>>> @@ -42,23 +42,12 @@ multiple paths from different providers depending on use case and the
>>>   components it has to interact with.
>>>   
>>>   Required properties:
>>> -interconnects : Pairs of phandles and interconnect provider specifier to denote
>>> -             the edge source and destination ports of the interconnect path.
>>> -
>>> -Optional properties:
>>> -interconnect-names : List of interconnect path name strings sorted in the same
>>> -                  order as the interconnects property. Consumers drivers will use
>>> -                  interconnect-names to match interconnect paths with interconnect
>>> -                  specifier pairs.
>>> -
>>> -                     Reserved interconnect names:
>>> -                      * dma-mem: Path from the device to the main memory of
>>> -                                 the system
>>> +interconnects : phandle and interconnect provider specifier to denote
>>> +             the edge source for this node.
>>>   
>>>   Example:
>>>   
>>>        sdhci@7864000 {
>>>                ...
>>> -             interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
>>> -             interconnect-names = "sdhc-mem";
>>> +             interconnects = <&pnoc MASTER_SDCC_1>;
>> This example seems incomplete, as it doesn't describe the path between
>> CPU and the config space, with this in place I think you need the
>> interconnect-names.
>>
>>
>> But with a single interconnect, the interconnect-names should be
>> omitted, as done in other frameworks.
>>
> Sure, no names makes sense when it's just one path.
>
Stephen Boyd Oct. 4, 2019, 6:14 p.m. UTC | #5
Quoting David Dai (2019-09-27 10:16:07)
> 
> On 9/25/2019 6:28 AM, Stephen Boyd wrote:
> > Quoting Bjorn Andersson (2019-09-24 22:59:33)
> >> On Tue 24 Sep 22:41 PDT 2019, Stephen Boyd wrote:
> >>
> >>> The DT binding could also be simplified somewhat. Currently a path needs
> >>> to be specified in DT for each and every use case that is possible for a
> >>> device to want. Typically the path is to memory, which looks to be
> >>> reserved for in the binding with the "dma-mem" named path, but sometimes
> >>> the path is from a device to the CPU or more generically from a device
> >>> to another device which could be a CPU, cache, DMA master, or another
> >>> device if some sort of DMA to DMA scenario is happening. Let's remove
> >>> the pair part of the binding so that we just list out a device's
> >>> possible endpoints on the bus or busses that it's connected to.
> >>>
> >>> If the kernel wants to figure out what the path is to memory or the CPU
> >>> or a cache or something else it should be able to do that by finding the
> >>> node for the "destination" endpoint, extracting that node's
> >>> "interconnects" property, and deriving the path in software. For
> >>> example, we shouldn't need to write out each use case path by path in DT
> >>> for each endpoint node that wants to set a bandwidth to memory. We
> >>> should just be able to indicate what endpoint(s) a device sits on based
> >>> on the interconnect provider in the system and then walk the various
> >>> interconnects to find the path from that source endpoint to the
> >>> destination endpoint.
> >>>
> >> But doesn't this implies that the other end of the path is always some
> >> specific node, e.g. DDR? With a single node how would you describe
> >> CPU->LLCC or GPU->OCIMEM?
> > By only specifying the endpoint the device uses it describes what the
> > hardware block interfaces with. It doesn't imply that there's only one
> > other end of the path. It implies that the paths should be discoverable
> > by walking the interconnect graph given some source device node and
> > target device node. In most cases the target device node will be a DDR
> > controller node, but sometimes it could be LLCC or OCIMEM. We may need
> > to add some sort of "get the DDR controller device" API or work it into
> > the interconnect API somehow to indicate what target endpoint is
> > desired. By not listing all those paths in DT we gain flexibility to add
> > more paths later on without having to update or tweak DT to describe
> > more paths/routes through the interconnect.
> 
> 
> I'm unsure that using the target device node or target source device is 
> the correct way to represent the constraints that the consumers apply on 
> the interconnects. While it's true the traffic is intended for the 
> targeted devices, the constraints(QoS or BW) are for the interconnect or 
> specifically the paths that span across the ports of various 
> interconnects(NoC devices in this case). I think having both src and dst 
> properties is still the simplest way to achieve the flexibility that we 
> require to set the constraints for ports(that may not have a target 
> device defined in DT or exists as some intermediate port across multiple 
> interconnects).
> 

The need for paths described in DT may make sense for certain cases, but
that seems to be the minority. My guess is that maybe an OPP binding
would need to describe the path to apply the bandwidth to. Otherwise I
don't see what the need is. Maybe you can list out more scenarios?

Either way, the binding has been designed to cover all the possibilities
by just saying that we have to describe at least two points for an
'interconnect'. It is a path-based binding. I'd rather see us have an
endpoint-based binding with the option to fall back to paths if we need
to constrain something. Maybe this can be a new property that is used
the majority of the time?

 gpu@f00 {
   interconnect-endpoints = <&icc GPU_SLAVE_PORT>, <&icc GPU_MASTER_PORT0>, <&icc GPU_MASTER_PORT1>;
   interconnect-endpoint-names = "slave", "master0", "master1";
 };

(Or we can invert it and make interconnect-paths be non-standard)

The property would describe what's going to this device and how it's
integrated into the SoC. This is similar to how we describe what port is
connected to a device with the OF graph binding, or how we only list the
clk or regulator that goes to a device and not the whole path to the
root of the respective tree.

There can be a driver API that gets these port numbers out and
constructs a path to another struct device or struct device_node. I
imagine that 90% of the time a driver is going to request some bandwidth
from their master port (or ports) to the DDR controller. We could either
make the DDR controller a device that can be globally acquired or
integrate it deeply into the API to the point that it looks for a DDR
controller somewhere or relies on interconnect providers to tell the
framework about the controller.
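
Strawman for the driver-facing side, hypothetical names only, building
on the interconnect-endpoints example above:

	#include <linux/device.h>
	#include <linux/err.h>
	#include <linux/interconnect.h>

	/*
	 * Hypothetical: build a path from one of @dev's named endpoints (from
	 * interconnect-endpoints/-names above) to another device's endpoint,
	 * e.g. the DDR controller, by walking the graph between them.
	 */
	struct icc_path *icc_get_between(struct device *dev, const char *ep_name,
					 struct device *target);

	/* e.g. bandwidth from the GPU's "master0" port to the DDR controller */
	static int gpu_request_ddr_bw(struct device *gpu, struct device *ddr)
	{
		struct icc_path *path;

		path = icc_get_between(gpu, "master0", ddr);
		if (IS_ERR(path))
			return PTR_ERR(path);

		/* made-up kB/s numbers; a real driver would keep 'path' around */
		return icc_set_bw(path, 500000, 1000000);
	}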

TL;DR is that I don't want to have to specify paths in each and every
node to say that some port on this device here is connected to some port
on the DDR controller and that we want to adjust the bandwidth or QoS
across this path. I'd like to describe a device "hermetically" by
listing out the ports the device has. Then we can rely on the OS to
figure out what paths to construct and change. If we need to constrain
or tweak those paths then we can do that with the existing interconnects
binding, but let's worry about that when we get there.
Saravana Kannan Oct. 9, 2019, 12:11 a.m. UTC | #6
Quoting Stephen Boyd:
> Quoting David Dai (2019-09-27 10:16:07)
> > On 9/25/2019 6:28 AM, Stephen Boyd wrote:
> > > Quoting Bjorn Andersson (2019-09-24 22:59:33)
> > >> On Tue 24 Sep 22:41 PDT 2019, Stephen Boyd wrote:
> > >>
> > >>> The DT binding could also be simplified somewhat. Currently a path needs
> > >>> to be specified in DT for each and every use case that is possible for a
> > >>> device to want. Typically the path is to memory, which looks to be
> > >>> reserved for in the binding with the "dma-mem" named path, but sometimes
> > >>> the path is from a device to the CPU or more generically from a device
> > >>> to another device which could be a CPU, cache, DMA master, or another
> > >>> device if some sort of DMA to DMA scenario is happening. Let's remove
> > >>> the pair part of the binding so that we just list out a device's
> > >>> possible endpoints on the bus or busses that it's connected to.
> > >>>
> > >>> If the kernel wants to figure out what the path is to memory or the CPU
> > >>> or a cache or something else it should be able to do that by finding the
> > >>> node for the "destination" endpoint, extracting that node's
> > >>> "interconnects" property, and deriving the path in software. For
> > >>> example, we shouldn't need to write out each use case path by path in DT
> > >>> for each endpoint node that wants to set a bandwidth to memory. We
> > >>> should just be able to indicate what endpoint(s) a device sits on based
> > >>> on the interconnect provider in the system and then walk the various
> > >>> interconnects to find the path from that source endpoint to the
> > >>> destination endpoint.
> > >>>
> > >> But doesn't this implies that the other end of the path is always some
> > >> specific node, e.g. DDR? With a single node how would you describe
> > >> CPU->LLCC or GPU->OCIMEM?
> > > By only specifying the endpoint the device uses it describes what the
> > > hardware block interfaces with. It doesn't imply that there's only one
> > > other end of the path. It implies that the paths should be discoverable
> > > by walking the interconnect graph given some source device node and
> > > target device node. In most cases the target device node will be a DDR
> > > controller node, but sometimes it could be LLCC or OCIMEM. We may need
> > > to add some sort of "get the DDR controller device" API or work it into
> > > the interconnect API somehow to indicate what target endpoint is
> > > desired. By not listing all those paths in DT we gain flexibility to add
> > > more paths later on without having to update or tweak DT to describe
> > > more paths/routes through the interconnect.
> >
> >
> > I'm unsure that using the target device node or target source device is
> > the correct way to represent the constraints that the consumers apply on
> > the interconnects. While it's true the traffic is intended for the
> > targeted devices, the constraints(QoS or BW) are for the interconnect or 
> > specifically the paths that span across the ports of various
> > interconnects(NoC devices in this case). I think having both src and dst 
> > properties is still the simplest way to achieve the flexibility that we
> > require to set the constraints for ports(that may not have a target
> > device defined in DT or exists as some intermediate port across multiple 
> > interconnects).
> 
> 
> The need for paths described in DT may make sense for certain cases but
> that seems to be the minority. My guess is that maybe an OPP binding
> would need to describe the path to apply the bandwidth to. Otherwise I
> don't see what the need is for. Maybe you can list out more scenarios?
> 
> Either way, the binding has been designed to cover all the possibilities
> by just saying that we have to describe at least two points for an
> 'interconnect'. It is a path based binding. I'd rather see us have an
> endpoint based binding with the option to fallback to paths if we need
> to constrain something. Maybe this can be a new property that is used
> the majority of the time?
> 
> gpu@f00 {
>   interconnect-endpoints = <&icc GPU_SLAVE_PORT>, <&icc GPU_MASTER_PORT0>, <&icc GPU_MASTER_PORT1>;
>   interconnect-endpoint-names = "slave", "master0", "master1";
> };
> 
> (Or we can invert it and make interconnect-paths be non-standard)
> 
> The property would describe what's going to this device and how it's
> integrated into the SoC. This is similar to how we describe what port is
> connected to a device with the of graph binding or how we only list the
> clk or regulator that goes to a device and not the whole path to the
> root of the respective tree.
> 
> There can be a driver API that gets these port numbers out and
> constructs a path to another struct device or struct device_node. I
> imagine that 90% of the time a driver is going to request some bandwidth
> from their master port (or ports) to the DDR controller. We could either
> make the DDR controller a device that can be globally acquired or
> integrate it deeply into the API to the point that it looks for a DDR
> controller somewhere or relies on interconnect providers to tell the
> framework about the controller.
> 
> TL;DR is that I don't want to have to specify paths in each and every
> node to say that some port on this device here is connected to some port
> on the DDR controller and that we want to adjust the bandwidth or QoS
> across this path. I'd like to describe a device "hermetically" by
> listing out the ports the device has. Then we can rely on the OS to
> figure out what paths to construct and change. If we need to constrain
> or tweak those paths then we can do that with the existing interconnects
> binding, but let's worry about that when we get there.

I think I understand what you are trying to do here. Correct me if my
understanding is wrong. Each device just lists what interconnects (and their
ports) it's connected to -- let's call these the device endpoints.

If a device is making bandwidth votes from itself to some other device, it just
specifies the other endpoint (as a device? path name?) in icc_get(). The
interconnect framework can then figure out what two interconnect ports the
icc_get() is about (by looking at the device endpoints info) and then construct
the path.

But it's not clear how you'll handle the case where a Device-A wants to make a
bandwidth vote from a Device-B to Device-C. This is necessary for multiple
scenarios, e.g. booting a remote proc where the CPU needs to make sure the
remote proc has its path to DDR active. icc_get() can't always assume the
source is the device making the request. So, I don't think you can omit the
source of the path in the DT binding.
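
In code, the proxy vote needs an explicit source. A purely hypothetical
sketch (none of these functions exist, and the bandwidth numbers are
made up):

	#include <linux/err.h>
	#include <linux/interconnect.h>
	#include <linux/of.h>

	/* Hypothetical: path between two arbitrary nodes, neither being the caller */
	struct icc_path *icc_get_by_nodes(struct device_node *src_np,
					  struct device_node *dst_np);

	/*
	 * Device-A (the remoteproc driver) voting for the path from
	 * Device-B (the remote proc) to Device-C (DDR).
	 */
	static int rproc_vote_ddr_bw(struct device_node *rproc_np,
				     struct device_node *ddr_np)
	{
		struct icc_path *path;

		path = icc_get_by_nodes(rproc_np, ddr_np);
		if (IS_ERR(path))
			return PTR_ERR(path);

		return icc_set_bw(path, 200000, 400000); /* made-up kB/s */
	}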

If we take the above into account, would the only change in your proposal be to
list the source and destination device in DT instead of their interconnect and
ports?  I don't have a strong opinion on whether this is necessary, but want to
make sure that we are all talking about the same thing.

Another way to look at this: there's one crucial difference between clocks and
interconnects. Given a clock controller and its "output port", the clock you
are referring to doesn't change regardless of which device is asking for it.
But in the case of an interconnect, if you specify just a destination
interconnect and its port, the path you are referring to changes based on
which device is requesting it. And if you want a device-independent way of
referring to a path, you need to specify the source and destination explicitly.

Also, if firmware isn't used, how do you see your icc_get() proposal working
with just the "name"? In what way is it better than the current icc_get() API?

Thanks,
Saravana
P.S: Sorry about any messy quoting/indentation. I'm hand editing the quoting
from the raw email text and using git to send a response.
Stephen Boyd Oct. 16, 2019, 8:23 p.m. UTC | #7
Quoting Saravana Kannan (2019-10-08 17:11:59)
> Quoting Stephen Boyd:
> > The property would describe what's going to this device and how it's
> > integrated into the SoC. This is similar to how we describe what port is
> > connected to a device with the of graph binding or how we only list the
> > clk or regulator that goes to a device and not the whole path to the
> > root of the respective tree.
> > 
> > There can be a driver API that gets these port numbers out and
> > constructs a path to another struct device or struct device_node. I
> > imagine that 90% of the time a driver is going to request some bandwidth
> > from their master port (or ports) to the DDR controller. We could either
> > make the DDR controller a device that can be globally acquired or
> > integrate it deeply into the API to the point that it looks for a DDR
> > controller somewhere or relies on interconnect providers to tell the
> > framework about the controller.
> > 
> > TL;DR is that I don't want to have to specify paths in each and every
> > node to say that some port on this device here is connected to some port
> > on the DDR controller and that we want to adjust the bandwidth or QoS
> > across this path. I'd like to describe a device "hermetically" by
> > listing out the ports the device has. Then we can rely on the OS to
> > figure out what paths to construct and change. If we need to constrain
> > or tweak those paths then we can do that with the existing interconnects
> > binding, but let's worry about that when we get there.
> 
> I think I understand what you are trying to do here. Correct me if my
> understanding is wrong. Each device just lists what interconnects (and their
> ports) it's connected to -- let's call this device endpoints.

Yes this is the ideal case.

> 
> If a device is making bandwidth votes from itself to some other device, it just
> specifies the other end point (as a device? path name?) in icc_get(). The
> interconnect framework can then figure out what two interconnect ports the
> icc_get() is about (by looking at the device endpoints info) and then construct
> the path.
> 
> But it's not clear how you'll handle the case where a Device-A wants to make a
> bandwidth vote from a Device-B to Device-C. This is necessary for multiple
> scenarios. Eg: booting a remote proc where the CPU needs to make sure the
> remote proc has its path to DDR active. icc_get() can't always assume the
> source is the device making the request. So, I don't think you can omit the
> source of the path in the DT binding.

This is one scenario, not various scenarios. For a remote proc, why
wouldn't we list the endpoint for the remote processor in the remote
proc node in DT? That makes sense to me, so I'm not following this
scenario.

> 
> If we take the above into account, would the only change in your proposal be to
> list the source and destination device in DT instead of their interconnect and
> ports?  I don't have a strong opinion on whether this is necessary, but want to
> make sure that we are all talking about the same thing.

What does this mean? I would assume that if device A is using another
device B, either that would be expressed in DT via a phandle property,
or userspace would be connecting the two devices to each other with
something like dma_buf, so the driver would know the other side of the
path it wants to scale bandwidth on.

> 
> Another way to look at this: There's one crucial difference between clocks and
> interconnects. Given a clock controller and it's "output port", the clock that
> you referring to doesn't change irrespective of what device is asking for it.
> But in the case of an interconnect, if you specify just a destination
> interconnect and it's port, the path that you are referring to changes based on
> which device is requesting it. And if you want a device independent of
> referring to a path, you need to specify the source and destination explicitly.

Yes, but those are use cases that don't need to be expressed in DT. We
should be able to get by with just listing the endpoints and then
building the layer in the kernel to get the other side of the endpoint,
whether it's something like DDR or another device that we want to
connect the endpoint to.

> 
> Also, if a firmware isn't used, how do you see your icc_get() proposal working
> with just the "name"? In what way is it better than the current icc_get() API?

If there isn't firmware and we're using platform data, then I imagine we
would have to have data tables listing out the endpoints of various
devices, the endpoint names, and the device names associated with those
endpoints so we could match them up in the framework.
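
Very roughly, I'd picture something like this (struct and field names
invented for the example; MASTER_SDCC_1 being the port constant from the
provider's binding header, as in the binding example):

	/* Hypothetical platform-data description of a device's bus endpoints */
	struct icc_ep_lookup {
		const char *dev_name;	/* consumer device, e.g. "7864000.sdhci" */
		const char *ep_name;	/* endpoint name, e.g. "master0" */
		const char *provider;	/* interconnect provider device name */
		int port_id;		/* port id on that provider */
	};

	static const struct icc_ep_lookup board_icc_eps[] = {
		{ "7864000.sdhci", "master0", "pnoc", MASTER_SDCC_1 },
		/* ... */
	};

icc_get() would then match the consumer's device name and endpoint name
against such a table instead of parsing an "interconnects" property.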

Patch

diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt b/Documentation/devicetree/bindings/interconnect/interconnect.txt
index 6f5d23a605b7..f8979186b8a7 100644
--- a/Documentation/devicetree/bindings/interconnect/interconnect.txt
+++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
@@ -11,7 +11,7 @@  The interconnect provider binding is intended to represent the interconnect
 controllers in the system. Each provider registers a set of interconnect
 nodes, which expose the interconnect related capabilities of the interconnect
 to consumer drivers. These capabilities can be throughput, latency, priority
-etc. The consumer drivers set constraints on interconnect path (or endpoints)
+etc. The consumer drivers set constraints on interconnect paths (or endpoints)
 depending on the use case. Interconnect providers can also be interconnect
 consumers, such as in the case where two network-on-chip fabrics interface
 directly.
@@ -42,23 +42,12 @@  multiple paths from different providers depending on use case and the
 components it has to interact with.
 
 Required properties:
-interconnects : Pairs of phandles and interconnect provider specifier to denote
-	        the edge source and destination ports of the interconnect path.
-
-Optional properties:
-interconnect-names : List of interconnect path name strings sorted in the same
-		     order as the interconnects property. Consumers drivers will use
-		     interconnect-names to match interconnect paths with interconnect
-		     specifier pairs.
-
-                     Reserved interconnect names:
-			 * dma-mem: Path from the device to the main memory of
-			            the system
+interconnects : phandle and interconnect provider specifier to denote
+	        the edge source for this node.
 
 Example:
 
 	sdhci@7864000 {
 		...
-		interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
-		interconnect-names = "sdhc-mem";
+		interconnects = <&pnoc MASTER_SDCC_1>;
 	};
diff --git a/include/linux/interconnect.h b/include/linux/interconnect.h
index d70a914cba11..e1ae704f5ab1 100644
--- a/include/linux/interconnect.h
+++ b/include/linux/interconnect.h
@@ -25,23 +25,14 @@  struct device;
 
 #if IS_ENABLED(CONFIG_INTERCONNECT)
 
-struct icc_path *icc_get(struct device *dev, const int src_id,
-			 const int dst_id);
-struct icc_path *of_icc_get(struct device *dev, const char *name);
+struct icc_path *icc_get(struct device *dev, const char *name);
 void icc_put(struct icc_path *path);
 int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw);
 void icc_set_tag(struct icc_path *path, u32 tag);
 
 #else
 
-static inline struct icc_path *icc_get(struct device *dev, const int src_id,
-				       const int dst_id)
-{
-	return NULL;
-}
-
-static inline struct icc_path *of_icc_get(struct device *dev,
-					  const char *name)
+static inline struct icc_path *icc_get(struct device *dev, const char *name)
 {
 	return NULL;
 }