[RFC] cpufreq: Add bindings for CPU clock sharing topology

Message ID CAKohpo=MmoALOHg-=7cf0jm=OJO57TWQZQXqkRwhRwU-DGPMmA@mail.gmail.com
State Superseded, archived

Commit Message

Viresh Kumar July 18, 2014, 6:40 a.m. UTC
On 18 July 2014 11:47, Olof Johansson <olof@lixom.net> wrote:
> Why complicate it by using two properties?
>
> If there is no property, then the CPUs are assumed to be controlled
> independently.
>
> if there is a clock-master = <phandle> property, then that points at
> the cpu that is the main one controlling clock for the group.
>
> There's really no need to label the master -- it will be the only one
> with incoming links but nothing outgoing. And a master without slaves
> is an independently controlled cpu so you should be fine in that
> aspect too.

I thought so earlier, but then I remembered something I read long back.
Don't remember which thread now, but I *might* be wrong..

"Bindings are like APIs and new bindings shouldn't break existing stuff.."

And:

> If there is no property, then the CPUs are assumed to be controlled
> independently.

seems to break the existing API.

But if that isn't the case, the bindings are very simple & clear to handle.
Diff for new bindings:

 Examples:
 1.) All CPUs share a single clock line
@@ -30,34 +29,6 @@ cpus {
                compatible = "arm,cortex-a15";
                reg = <0>;
                next-level-cache = <&L2>;
-               clock-master;
-               operating-points = <
-                       /* kHz    uV */
-                       792000  1100000
-                       396000  950000
-                       198000  850000
-               >;
-               clock-latency = <61036>; /* two CLK32 periods */
-       };
-
-       cpu1: cpu@1 {
-               compatible = "arm,cortex-a15";
-               reg = <1>;
-               next-level-cache = <&L2>;
-               clock-ganged = <&cpu0>;
-       };
-};
-
-OR (clock-master/ganged aren't defined)
-
-cpus {
-       #address-cells = <1>;
-       #size-cells = <0>;
-
-       cpu0: cpu@0 {
-               compatible = "arm,cortex-a15";
-               reg = <0>;
-               next-level-cache = <&L2>;
                operating-points = <
                        /* kHz    uV */
                        792000  1100000
@@ -71,6 +42,7 @@ cpus {
                compatible = "arm,cortex-a15";
                reg = <1>;
                next-level-cache = <&L2>;
+               clock-master = <&cpu0>;
        };
 };

@@ -83,7 +55,6 @@ cpus {
                compatible = "arm,cortex-a15";
                reg = <0>;
                next-level-cache = <&L2>;
-               clock-master;
                operating-points = <
                        /* kHz    uV */
                        792000  1100000
@@ -97,7 +68,6 @@ cpus {
                compatible = "arm,cortex-a15";
                reg = <1>;
                next-level-cache = <&L2>;
-               clock-master;
                operating-points = <
                        /* kHz    uV */
                        792000  1100000
@@ -119,7 +89,6 @@ cpus {
                compatible = "arm,cortex-a15";
                reg = <0>;
                next-level-cache = <&L2>;
-               clock-master;
                operating-points = <
                        /* kHz    uV */
                        792000  1100000
@@ -133,14 +102,13 @@ cpus {
                compatible = "arm,cortex-a15";
                reg = <1>;
                next-level-cache = <&L2>;
-               clock-ganged = <&cpu0>;
+               clock-master = <&cpu0>;
        };

        cpu2: cpu@100 {
                compatible = "arm,cortex-a7";
                reg = <100>;
                next-level-cache = <&L2>;
-               clock-master;
                operating-points = <
                        /* kHz    uV */
                        792000  950000
@@ -154,6 +122,6 @@ cpus {
                compatible = "arm,cortex-a7";
                reg = <101>;
                next-level-cache = <&L2>;
-               clock-ganged = <&cpu2>;
+               clock-master = <&cpu2>;
        };
 };

Comments

Olof Johansson July 18, 2014, 9:52 p.m. UTC | #1
On Thu, Jul 17, 2014 at 11:40 PM, Viresh Kumar <viresh.kumar@linaro.org> wrote:
> On 18 July 2014 11:47, Olof Johansson <olof@lixom.net> wrote:
>> Why complicate it by using two properties?
>>
>> If there is no property, then the CPUs are assumed to be controlled
>> independently.
>>
>> if there is a clock-master = <phandle> property, then that points at
>> the cpu that is the main one controlling clock for the group.
>>
>> There's really no need to label the master -- it will be the only one
>> with incoming links but nothing outgoing. And a master without slaves
>> is an independently controlled cpu so you should be fine in that
>> aspect too.
>
> I thought so earlier, but then I remembered something I read long back.
> Don't remember which thread now, but I *might* be wrong..
>
> "Bindings are like APIs and new bindings shouldn't break existing stuff.."
>
> And:
>
>> If there is no property, then the CPUs are assumed to be controlled
>> independently.
>
> seems to break the existing API.

What is the current API that is being broken, in your opinion?

> But if that isn't the case, the bindings are very simple & clear to handle.
> Diff for new bindings:

It's somewhat confusing to see a diff to the patch instead of a new
version. It seems to remove the cpu 0 entry now?


-Olof
Viresh Kumar July 19, 2014, 2:46 p.m. UTC | #2
On 19 July 2014 03:22, Olof Johansson <olof@lixom.net> wrote:
> What is the current API that is being broken, in your opinion?

So, currently the nodes don't have any such property, and drivers
consider all of them as sharing clocks, e.g. cpufreq-cpu0.

Now, if we use those older DTs after the new changes, drivers would
consider the CPUs as having separate clocks, which is the opposite of
what currently happens.

Not sure if this counts as broken.
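
For illustration, a minimal sketch (made-up values) of the kind of DT that is
already upstream and that cpufreq-cpu0 today treats as all CPUs sharing one
clock line, since there is no sharing-related property anywhere:

cpus {
        #address-cells = <1>;
        #size-cells = <0>;

        cpu@0 {
                compatible = "arm,cortex-a9";
                reg = <0>;
                operating-points = <
                        /* kHz    uV */
                        996000  1250000
                        792000  1175000
                >;
                clock-latency = <61036>;
        };

        cpu@1 {
                compatible = "arm,cortex-a9";
                reg = <1>;
        };
};

Under a "no property means independent clocks" rule the meaning of such a DT
would silently flip, which is the compatibility concern above.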

>> But if that isn't the case, the bindings are very simple & clear to handle.
>> Diff for new bindings:
>
> It's somewhat confusing to see a diff to the patch instead of a new
> version. It seems to remove the cpu 0 entry now?

Not really, I removed an unwanted example. This is how it looks:



* Generic CPUFreq clock bindings

Clock lines may or may not be shared among different CPUs on a platform.

Possible configurations:
1.) All CPUs share a single clock line
2.) All CPUs have independent clock lines
3.) CPUs within a group/cluster share clock line but each group/cluster have a
    separate line for itself

Optional Properties:
- clock-master: Contains phandle of the master cpu controlling clocks.

  Ideally there is nothing like a "master" CPU as any CPU can play with DVFS
  settings. But we have to choose one cpu out of a group, so that others can
  point to it.

  If there is no "clock-master" property for a cpu node, it is considered as
  master. It may or may not have other slave CPUs pointing towards it.

Examples:
1.) All CPUs share a single clock line

cpus {
        #address-cells = <1>;
        #size-cells = <0>;

        cpu0: cpu@0 {
                compatible = "arm,cortex-a15";
                reg = <0>;
                next-level-cache = <&L2>;
                operating-points = <
                        /* kHz    uV */
                        792000  1100000
                        396000  950000
                        198000  850000
                >;
                clock-latency = <61036>; /* two CLK32 periods */
        };

        cpu1: cpu@1 {
                compatible = "arm,cortex-a15";
                reg = <1>;
                next-level-cache = <&L2>;
                clock-master = <&cpu0>;
        };
};

2.) All CPUs have independent clock lines
cpus {
        #address-cells = <1>;
        #size-cells = <0>;

        cpu0: cpu@0 {
                compatible = "arm,cortex-a15";
                reg = <0>;
                next-level-cache = <&L2>;
                operating-points = <
                        /* kHz    uV */
                        792000  1100000
                        396000  950000
                        198000  850000
                >;
                clock-latency = <61036>; /* two CLK32 periods */
        };

        cpu1: cpu@1 {
                compatible = "arm,cortex-a15";
                reg = <1>;
                next-level-cache = <&L2>;
                operating-points = <
                        /* kHz    uV */
                        792000  1100000
                        396000  950000
                        198000  850000
                >;
                clock-latency = <61036>; /* two CLK32 periods */
        };
};

3.) CPUs within a group/cluster share single clock line but each group/cluster
have a separate line for itself

cpus {
        #address-cells = <1>;
        #size-cells = <0>;

        cpu0: cpu@0 {
                compatible = "arm,cortex-a15";
                reg = <0>;
                next-level-cache = <&L2>;
                operating-points = <
                        /* kHz    uV */
                        792000  1100000
                        396000  950000
                        198000  850000
                >;
                clock-latency = <61036>; /* two CLK32 periods */
        };

        cpu1: cpu@1 {
                compatible = "arm,cortex-a15";
                reg = <1>;
                next-level-cache = <&L2>;
                clock-master = <&cpu0>;
        };

        cpu2: cpu@100 {
                compatible = "arm,cortex-a7";
                reg = <100>;
                next-level-cache = <&L2>;
                operating-points = <
                        /* kHz    uV */
                        792000  950000
                        396000  750000
                        198000  450000
                >;
                clock-latency = <61036>; /* two CLK32 periods */
        };

        cpu3: cpu@101 {
                compatible = "arm,cortex-a7";
                reg = <101>;
                next-level-cache = <&L2>;
                clock-master = <&cpu2>;
        };
};
Santosh Shilimkar July 19, 2014, 3:24 p.m. UTC | #3
Viresh,

On Saturday 19 July 2014 10:46 AM, Viresh Kumar wrote:
> On 19 July 2014 03:22, Olof Johansson <olof@lixom.net> wrote:
>> What is the current API that is being broken, in your opinion?
> 
> So, currently the nodes doesn't have any such property. And drivers
> consider all of them as sharing clocks, for eg: cpufreq-cpu0.
> 
> Now, if we use those older DT's after the new changes, drivers would
> consider CPUs as having separate clocks. And that would be opposite
> of what currently happens.
> 
> Not sure if this counts as broken.
> 
>>> But if that isn't the case, the bindings are very simple & clear to handle.
>>> Diff for new bindings:
>>
>> It's somewhat confusing to see a diff to the patch instead of a new
>> version. It seems to remove the cpu 0 entry now?
> 
> Not really, I removed an unwanted example. This is how it looks:
> 
> 
> 
> * Generic CPUFreq clock bindings
> 
> Clock lines may or may not be shared among different CPUs on a platform.
> 
> Possible configurations:
> 1.) All CPUs share a single clock line
> 2.) All CPUs have independent clock lines
> 3.) CPUs within a group/cluster share clock line but each group/cluster have a
>     separate line for itself
> 
> Optional Properties:
> - clock-master: Contains phandle of the master cpu controlling clocks.
> 
>   Ideally there is nothing like a "master" CPU as any CPU can play with DVFS
>   settings. But we have to choose one cpu out of a group, so that others can
>   point to it.
> 
>   If there is no "clock-master" property for a cpu node, it is considered as
>   master. It may or may not have other slave CPUs pointing towards it.
> 

Sorry for jumping late, but one of the points I was raising as part of your
other series was to extend the CPU topology bindings to cover the voltage
domain information, which is probably what is really needed to let
CPUfreq extract the information. Not sure if it was already discussed.

After all, the CPU clocks, clusters, clock-gating and power domains are pretty much
related. So instead of having a new binding for CPUFreq, I was wondering whether
we can extend the CPU topology binding information to include the missing information.
Scheduler work anyway needs that information.

Ref: Documentation/devicetree/bindings/arm/topology.txt

Does that make sense ?
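
As a purely hypothetical sketch (no such property exists in topology.txt today),
the sharing information could hang off the cluster nodes of the existing cpu-map
binding, something like:

cpu-map {
        cluster0 {
                /* hypothetical marker: cores below share one clock/voltage rail */
                clock-shared;

                core0 {
                        cpu = <&cpu0>;
                };
                core1 {
                        cpu = <&cpu1>;
                };
        };

        cluster1 {
                clock-shared;

                core0 {
                        cpu = <&cpu2>;
                };
                core1 {
                        cpu = <&cpu3>;
                };
        };
};

That would keep the clock/voltage grouping next to the topology information the
scheduler work needs, instead of adding a separate cpufreq binding.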

> Examples:
> 1.) All CPUs share a single clock line
> 
> cpus {
>         #address-cells = <1>;
>         #size-cells = <0>;
> 
>         cpu0: cpu@0 {
>                 compatible = "arm,cortex-a15";
>                 reg = <0>;
>                 next-level-cache = <&L2>;
>                 operating-points = <
>                         /* kHz    uV */
>                         792000  1100000
>                         396000  950000
>                         198000  850000
>                 >;
>                 clock-latency = <61036>; /* two CLK32 periods */
>         };
> 
>         cpu1: cpu@1 {
>                 compatible = "arm,cortex-a15";
>                 reg = <1>;
>                 next-level-cache = <&L2>;
>                 clock-master = <&cpu0>;
>         };
> };
> 
> 2.) All CPUs have independent clock lines
> cpus {
>         #address-cells = <1>;
>         #size-cells = <0>;
> 
>         cpu0: cpu@0 {
>                 compatible = "arm,cortex-a15";
>                 reg = <0>;
>                 next-level-cache = <&L2>;
>                 operating-points = <
>                         /* kHz    uV */
>                         792000  1100000
>                         396000  950000
>                         198000  850000
>                 >;
>                 clock-latency = <61036>; /* two CLK32 periods */
>         };
> 
>         cpu1: cpu@1 {
>                 compatible = "arm,cortex-a15";
>                 reg = <1>;
>                 next-level-cache = <&L2>;
>                 operating-points = <
>                         /* kHz    uV */
>                         792000  1100000
>                         396000  950000
>                         198000  850000
>                 >;
>                 clock-latency = <61036>; /* two CLK32 periods */
>         };
> };
> 
> 3.) CPUs within a group/cluster share single clock line but each group/cluster
> have a separate line for itself
> 
> cpus {
>         #address-cells = <1>;
>         #size-cells = <0>;
> 
>         cpu0: cpu@0 {
>                 compatible = "arm,cortex-a15";
>                 reg = <0>;
>                 next-level-cache = <&L2>;
>                 operating-points = <
>                         /* kHz    uV */
>                         792000  1100000
>                         396000  950000
>                         198000  850000
>                 >;
>                 clock-latency = <61036>; /* two CLK32 periods */
>         };
> 
>         cpu1: cpu@1 {
>                 compatible = "arm,cortex-a15";
>                 reg = <1>;
>                 next-level-cache = <&L2>;
>                 clock-master = <&cpu0>;
>         };
> 
>         cpu2: cpu@100 {
>                 compatible = "arm,cortex-a7";
>                 reg = <100>;
>                 next-level-cache = <&L2>;
>                 operating-points = <
>                         /* kHz    uV */
>                         792000  950000
>                         396000  750000
>                         198000  450000
>                 >;
>                 clock-latency = <61036>; /* two CLK32 periods */
>         };
> 
>         cpu3: cpu@101 {
>                 compatible = "arm,cortex-a7";
>                 reg = <101>;
>                 next-level-cache = <&L2>;
>                 clock-master = <&cpu2>;
>         };
> };
> 

Viresh Kumar July 20, 2014, 12:07 p.m. UTC | #4
On 19 July 2014 20:54, Santosh Shilimkar <santosh.shilimkar@ti.com> wrote:
> Sorry for jumping late

No, you aren't late. It's just a 2-day-old thread :)

> but one of the point I was raising as part of your
> other series was to extend the CPU topology bindings to cover the voltage
> domain information which is probably what is really needed to let the
> CPUfreq extract the information. Not sure if it was already discussed.

No, it wasn't.

> After all the CPU clocks, cluster, clock-gating, power domains are pretty much
> related. So instead of having new binding for CPUFreq, I was wondering whether
> we can extend the CPU topology binding information to include missing information.
> Scheduler work anyway needs that information.
>
> Ref: Documentation/devicetree/bindings/arm/topology.txt
>
> Does that make sense ?

Yeah it does, but I am not sure what exactly the bindings should look like then.
So, the most basic step could be moving the new bindings to topology.txt
and renaming clock-master to dvfs-master.

What else?

If it's going to be too controversial then we *can* go for just dvfs bindings
for now and then update them later.

Doesn't make sense? :)
Santosh Shilimkar July 21, 2014, 1:40 p.m. UTC | #5
On Sunday 20 July 2014 08:07 AM, Viresh Kumar wrote:
> On 19 July 2014 20:54, Santosh Shilimkar <santosh.shilimkar@ti.com> wrote:
>> Sorry for jumping late
> 
> No, you aren't late. Its just 2 days old thread :)
> 
>> but one of the point I was raising as part of your
>> other series was to extend the CPU topology bindings to cover the voltage
>> domain information which is probably what is really needed to let the
>> CPUfreq extract the information. Not sure if it was already discussed.
> 
> Not it wasn't.
> 
>> After all the CPU clocks, cluster, clock-gating, power domains are pretty much
>> related. So instead of having new binding for CPUFreq, I was wondering whether
>> we can extend the CPU topology binding information to include missing information.
>> Scheduler work anyway needs that information.
>>
>> Ref: Documentation/devicetree/bindings/arm/topology.txt
>>
>> Does that make sense ?
> 
> Yeah it does, but I am not sure what exactly the bindings should look then.
> So, the most basic step could be moving the new bindings to topology.txt
> and name clock-master to dvfs-master.
> 
> What else?
> 
> If its going to be much controversial then we *can* go for just dvfs bindings
> for now and then update them later.
> 
It would be good to get others' opinions. As you said, if it is controversial then
it will stall the development.

Regards,
Santosh

Rob Herring July 21, 2014, 5 p.m. UTC | #6
On Sat, Jul 19, 2014 at 10:24 AM, Santosh Shilimkar
<santosh.shilimkar@ti.com> wrote:
> Viresh,
>
> On Saturday 19 July 2014 10:46 AM, Viresh Kumar wrote:
>> On 19 July 2014 03:22, Olof Johansson <olof@lixom.net> wrote:
>>> What is the current API that is being broken, in your opinion?
>>
>> So, currently the nodes doesn't have any such property. And drivers
>> consider all of them as sharing clocks, for eg: cpufreq-cpu0.
>>
>> Now, if we use those older DT's after the new changes, drivers would
>> consider CPUs as having separate clocks. And that would be opposite
>> of what currently happens.
>>
>> Not sure if this counts as broken.
>>
>>>> But if that isn't the case, the bindings are very simple & clear to handle.
>>>> Diff for new bindings:
>>>
>>> It's somewhat confusing to see a diff to the patch instead of a new
>>> version. It seems to remove the cpu 0 entry now?
>>
>> Not really, I removed an unwanted example. This is how it looks:
>>
>>
>>
>> * Generic CPUFreq clock bindings
>>
>> Clock lines may or may not be shared among different CPUs on a platform.
>>
>> Possible configurations:
>> 1.) All CPUs share a single clock line
>> 2.) All CPUs have independent clock lines
>> 3.) CPUs within a group/cluster share clock line but each group/cluster have a
>>     separate line for itself
>>
>> Optional Properties:
>> - clock-master: Contains phandle of the master cpu controlling clocks.
>>
>>   Ideally there is nothing like a "master" CPU as any CPU can play with DVFS
>>   settings. But we have to choose one cpu out of a group, so that others can
>>   point to it.
>>
>>   If there is no "clock-master" property for a cpu node, it is considered as
>>   master. It may or may not have other slave CPUs pointing towards it.
>>
>
> Sorry for jumping late, but one of the point I was raising as part of your
> other series was to extend the CPU topology bindings to cover the voltage
> domain information which is probably what is really needed to let the
> CPUfreq extract the information. Not sure if it was already discussed.
>
> After all the CPU clocks, cluster, clock-gating, power domains are pretty much
> related. So instead of having new binding for CPUFreq, I was wondering whether
> we can extend the CPU topology binding information to include missing information.
> Scheduler work anyway needs that information.
>
> Ref: Documentation/devicetree/bindings/arm/topology.txt
>
> Does that make sense ?

To me, but every time I suggest adding things to the topology the ARM
folks object... I really think we should have built the topology into
the /cpus hierarchy. Then we could add properties at the correct place
in the hierarchy where they are common.

I don't really like the proposal here. It just doesn't look like a
clean description of the h/w.

Ignoring compatibility, I would like to see something like
operating-points and/or the clock properties be moved up to /cpus if
they are shared and be per cpu node when they are not. This of course
does not work if you have independent OPPs for each cluster with a
shared clock within cluster.
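
A rough sketch of that idea (not an existing binding), with the shared
properties hoisted to the common parent node:

cpus {
        #address-cells = <1>;
        #size-cells = <0>;

        /* hypothetical: properties here apply to every cpu below */
        operating-points = <
                /* kHz    uV */
                792000  1100000
                396000  950000
                198000  850000
        >;
        clock-latency = <61036>;

        cpu0: cpu@0 {
                compatible = "arm,cortex-a15";
                reg = <0>;
        };

        cpu1: cpu@1 {
                compatible = "arm,cortex-a15";
                reg = <1>;
        };
};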

The operating-points binding has obvious shortcomings and this is
another example. Someone needs to step up with a new binding that
addresses this and all other issues (e.g. turbo modes, extra per OPP
data, etc.). I don't really want to see halfway fixes to the binding
that ignore the other issues (unless there is a really simple
solution).
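
Purely as an illustrative strawman for the kind of binding that could cover those
cases (this is not a concrete proposal), one could imagine one node per OPP with
room for extra per-OPP data:

cpu_opps: opp-table {
        /* hypothetical: one node per OPP so extra properties can be attached */
        opp-996000000 {
                frequency = <996000000>;        /* Hz */
                voltage = <1250000>;            /* uV */
                turbo-mode;                     /* opportunistic/boost OPP */
        };

        opp-792000000 {
                frequency = <792000000>;
                voltage = <1175000>;
                clock-latency = <61036>;        /* per-OPP transition latency */
        };
};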

Rob
Viresh Kumar July 23, 2014, 4:55 a.m. UTC | #7
On 21 July 2014 22:30, Rob Herring <rob.herring@linaro.org> wrote:

> To me, but every time I suggest adding things to the topology the ARM
> folks object... I really think we should have built the topology into
> the /cpus hierarchy. Then we could add properties at the correct place
> in the hierarchy where they are common.
>
> I don't really like the proposal here. It just doesn't look like a
> clean description of the h/w.

I knew it wouldn't be easy to get these bindings right in the first
attempt at least, but I still went for it so that this thread can progress:

https://lkml.org/lkml/2014/7/18/3

There are changes waiting for some kind of intermediate solution to get
in, so that other platforms can reuse the cpufreq-cpu0 driver, and so that
cpufreq-cpu0 can be converted to cpufreq-generic and can support
almost any platform.

Can you please reply to that thread to suggest some intermediate
solution that we can get fixed in 3.17? Mvebu and Krait are waiting for
these patches to get in..

The solution can be simple enough as right now we have only two kinds
of platforms:
- All CPUs share clock
- All CPUs have separate clocks

The simplest solution could have been a Kconfig entry, but that
would be a problem for multi-arch builds if these platforms do want to get
built with a multi-arch config. Otherwise we have to get some binding in
place for 3.17..

Please see if you can get that thread going :)

> Ignoring compatibility, I would like to see something like
> operating-points and/or the clock properties be moved up to /cpus if
> they are shared and be per cpu node when they are not. This of course
> does not work if you have independent OPPs for each cluster with a
> shared clock within cluster.

Yeah, that had always been the issue. People probably tried to get a
cluster {} block in as well to simplify that, but there were some objections
to it too, though I'm not sure what happened to that stuff..
Mike Turquette July 24, 2014, 12:33 a.m. UTC | #8
Quoting Viresh Kumar (2014-07-20 05:07:32)
> On 19 July 2014 20:54, Santosh Shilimkar <santosh.shilimkar@ti.com> wrote:
> > Sorry for jumping late
> 
> No, you aren't late. Its just 2 days old thread :)
> 
> > but one of the point I was raising as part of your
> > other series was to extend the CPU topology bindings to cover the voltage
> > domain information which is probably what is really needed to let the
> > CPUfreq extract the information. Not sure if it was already discussed.
> 
> Not it wasn't.
> 
> > After all the CPU clocks, cluster, clock-gating, power domains are pretty much
> > related. So instead of having new binding for CPUFreq, I was wondering whether
> > we can extend the CPU topology binding information to include missing information.
> > Scheduler work anyway needs that information.
> >
> > Ref: Documentation/devicetree/bindings/arm/topology.txt
> >
> > Does that make sense ?
> 
> Yeah it does, but I am not sure what exactly the bindings should look then.
> So, the most basic step could be moving the new bindings to topology.txt
> and name clock-master to dvfs-master.
> 
> What else?

If we're going to model the hardware then the binding should not use the
CPU phandles in "clock-master" or "dvfs-master". The correct thing to
model for a given CPU is which clock it consumes. It's not accurate to say
that one CPU is the "master", at least not in this context.

A previous approach tried to compare struct clk pointers, which is a bad
idea since those are just cookies and should not be deref'd by drivers.
However a similar approach would be to compare the phandle, right?
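
A sketch of what describing the consumed clock directly could look like
(cpu_clk is a made-up clock provider label): a driver that finds the same
clock specifier in two cpu nodes can infer the clock is shared, without any
"master" labelling:

cpus {
        #address-cells = <1>;
        #size-cells = <0>;

        cpu0: cpu@0 {
                compatible = "arm,cortex-a15";
                reg = <0>;
                clocks = <&cpu_clk>;    /* same phandle ... */
        };

        cpu1: cpu@1 {
                compatible = "arm,cortex-a15";
                reg = <1>;
                clocks = <&cpu_clk>;    /* ... so the clock is shared */
        };
};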

Regards,
Mike

> 
> If its going to be much controversial then we *can* go for just dvfs bindings
> for now and then update them later.
> 
> Doesn't make sense? :)
Rob Herring July 24, 2014, 2:24 a.m. UTC | #9
On Wed, Jul 23, 2014 at 7:33 PM, Mike Turquette
<mike.turquette@linaro.org> wrote:
> Quoting Viresh Kumar (2014-07-20 05:07:32)
>> On 19 July 2014 20:54, Santosh Shilimkar <santosh.shilimkar@ti.com> wrote:
>> > Sorry for jumping late
>>
>> No, you aren't late. Its just 2 days old thread :)
>>
>> > but one of the point I was raising as part of your
>> > other series was to extend the CPU topology bindings to cover the voltage
>> > domain information which is probably what is really needed to let the
>> > CPUfreq extract the information. Not sure if it was already discussed.
>>
>> Not it wasn't.
>>
>> > After all the CPU clocks, cluster, clock-gating, power domains are pretty much
>> > related. So instead of having new binding for CPUFreq, I was wondering whether
>> > we can extend the CPU topology binding information to include missing information.
>> > Scheduler work anyway needs that information.
>> >
>> > Ref: Documentation/devicetree/bindings/arm/topology.txt
>> >
>> > Does that make sense ?
>>
>> Yeah it does, but I am not sure what exactly the bindings should look then.
>> So, the most basic step could be moving the new bindings to topology.txt
>> and name clock-master to dvfs-master.
>>
>> What else?
>
> If we're going to model the hardware then the binding should not use the
> CPU phandles in "clock-master" or "dvfs-master". The correct thing to
> model for a given CPU is which clock consumes. It's not accurate to say
> that one CPU is the "master", at least not in this context.
>
> A previous approach tried to compare struct clk pointers, which is a bad
> idea since those are just cookies and should not be deref'd by drivers.
> However a similar approach would be to compare the phandle, right?

I think there needs to be a way to query whether a rate change for a
clock affects other children. As pointed out previously, the clock to
a core may not be shared, but its parent that can change rates could
be shared. This could be done with functions
clk_get_parent_rate_change to return the clock in the hierarchy which can
change rates, and clk_is_parent_clk which tells if one clock is a
child of another clock. It's been a while since I've looked at the
clock API. It could also be done by experiment. Change the rate for
core 0 and see if core 1's rate is changed and still equal. There's
probably some ordering issue with doing that though.

Rob
Viresh Kumar July 24, 2014, 10:39 a.m. UTC | #10
On 24 July 2014 07:54, Rob Herring <rob.herring@linaro.org> wrote:

>> A previous approach tried to compare struct clk pointers, which is a bad
>> idea since those are just cookies and should not be deref'd by drivers.
>> However a similar approach would be to compare the phandle, right?

Yeah. So what's the right way then?

> I think there needs to be a way to query whether a rate change for a
> clock affects other children. As pointed out previously, the clock to
> a core may not be shared, but it's parent that can change rates could
> be shared. This could be done with functions
> clk_get_parent_rate_change to return the clock in heirarchy which can
> change rates, and clk_is_parent_clk which tells if one clock is a
> child of another clock. It's been a while since I've looked at the
> clock api. It could also be done by experiment. Change the rate for
> core 0 and see if core 1's rate is changed and still equal. There's
> probably some ordering issue with doing that though.

But Mike sort of Nak'd that as well earlier :)
Mike Turquette July 25, 2014, 8:02 p.m. UTC | #11
Quoting Viresh Kumar (2014-07-24 03:39:31)
> On 24 July 2014 07:54, Rob Herring <rob.herring@linaro.org> wrote:
> 
> >> A previous approach tried to compare struct clk pointers, which is a bad
> >> idea since those are just cookies and should not be deref'd by drivers.
> >> However a similar approach would be to compare the phandle, right?
> 
> Yeah. So what's the right way then?
> 
> > I think there needs to be a way to query whether a rate change for a
> > clock affects other children. As pointed out previously, the clock to
> > a core may not be shared, but it's parent that can change rates could
> > be shared. This could be done with functions
> > clk_get_parent_rate_change to return the clock in heirarchy which can
> > change rates, and clk_is_parent_clk which tells if one clock is a
> > child of another clock. It's been a while since I've looked at the
> > clock api. It could also be done by experiment. Change the rate for
> > core 0 and see if core 1's rate is changed and still equal. There's
> > probably some ordering issue with doing that though.
> 
> But Mike sort of Nak'd that as well earlier :)

Sorry for being NAKy. Pushing more query functions into the clk api will
be a bad thing in the long term. First there are races: if a driver
queries the clock framework and then calls another clk.h api function
based on that result, we would need to lock the whole block of code
since the query may be invalid immediately after the result is returned.

Secondly, it is common for hardware to be designed with various use case
configurations already known. From a software point of view this is
"compile-time" info since we already know what OPPs we want to run at,
how we want to multiplex our clock outputs, etc. Often we know exactly
what we want to do from a hardware integration standpoint, and by trying
to have some algorithmic approach in software where we walk the tree and
figure stuff out dynamically is just going to make life hard for us.

I also think this whole push to consolidate every cpufreq machine driver
into cpufreq-cpu0 is complete madness. Machine drivers exist for a
reason. Now with this big push to remove cpufreq drivers we are just
pushing more and more logic into the clock framework (see some recent
patch series introducing "cpu clocks" as clock providers, which
essentially do what the old cpufreq machine drivers used to do).

So my opinion is to figure out how to specify the shared-clock versus
independent-clock parameter in DT. I think the big issue here is the
topology semantics around the cpus node, and I'll stay out of that
stuff. But let's please not introduce a random API for a single merge
window and let's also not over-consolidate machine driver code just for
the sake of having fewer C files.

Regards,
Mike
Viresh Kumar Aug. 25, 2014, 7:05 a.m. UTC | #12
On 26 July 2014 01:32, Mike Turquette <mike.turquette@linaro.org> wrote:
> So my opinion is to figure out how to specify the shared-clock versus
> independent-clock parameter it DT. I think the big issue here is the
> topology semantics around the cpus node, and I'll stay out of that
> stuff. But let's please not introduce a random API for a single merge
> window and let's also not over-consolidate machine driver code just for
> the sake of having fewer C files.

Returning to this almost a month later :(

- I agree that consolidation into the cpufreq-cpu0 driver is good but adding a
backend clk driver for that is bad. I would look at supporting callbacks
in the cpufreq-cpu0 driver, so that ->target()/->target_index() can be specified
by platform drivers and the rest of the code can be reused. But that's the next
problem to solve.

- Back to the first issue: how do we get the sharing information from DT?

As I read the replies (and I may be wrong in understanding them), there were
conflicting ideas from many people..

Can we please decide how we want these bindings to look, so that I can
implement them quickly and close this thread...

It's just hanging in the middle as there isn't a single clean solution yet.

--
viresh

Patch

diff --git a/Documentation/devicetree/bindings/cpufreq/cpu_clocks.txt b/Documentation/devicetree/bindings/cpufreq/cpu_clocks.txt
index 30ce9ad..d127bba 100644
--- a/Documentation/devicetree/bindings/cpufreq/cpu_clocks.txt
+++ b/Documentation/devicetree/bindings/cpufreq/cpu_clocks.txt
@@ -9,15 +9,14 @@  Clock lines may or may not be shared among different CPUs on a platform.
     separate line for itself

 Optional Properties:
-- clock-master: This declares cpu as clock master. Other CPUs can either define
-  "clock-ganged" or "clock-master" property, but shouldn't be missing both.
+- clock-master: Contains phandle of the master cpu controlling clocks.

-- clock-ganged: Should have phandle of a cpu declared as "clock-master".
-
-If any cpu node, doesn't have both "clock-master" and "clock-ganged" properties
-defined, it would be assumed that all CPUs on that platform share a single clock
-line. This will help supporting already upstreamed platforms.
+  Ideally there is nothing like a "master" CPU as any CPU can play with DVFS
+  settings. But we have to choose one cpu out of a group, so that others can
+  point to it.

+  If there is no "clock-master" property for a cpu node, it is considered as
+  master. It may or may not have other slave CPUs pointing towards it.