diff mbox series

[v18,4/5] dt-bindings: remoteproc: Add documentation for ZynqMP R5 rproc bindings

Message ID 20201005160614.3749-5-ben.levinsky@xilinx.com
State Superseded, archived
Series Provide basic driver to control Arm R5 co-processor found on Xilinx ZynqMP

Checks

Context Check Description
robh/checkpatch success
robh/dt-meta-schema success

Commit Message

Ben Levinsky Oct. 5, 2020, 4:06 p.m. UTC
Add binding for ZynqMP R5 OpenAMP.

Represent the RPU domain resources in one device node. Each RPU
processor is a subnode of the top RPU domain node.

Signed-off-by: Jason Wu <j.wu@xilinx.com>
Signed-off-by: Wendy Liang <jliang@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Ben Levinsky <ben.levinsky@xilinx.com>
---
v3:
- update zynqmp_r5 yaml parsing to not raise warnings for extra
  information in children of R5 node. The warning "node has a unit
  name, but no reg or ranges property" will still be raised though 
  as this particular node is needed to describe the
  '#address-cells' and '#size-cells' information.
v4:
- remove warning '/example-0/rpu@ff9a0000/r5@0: 
  node has a unit name, but no reg or ranges property'
  by adding reg to r5 node.
v5:
- update device tree sample and yaml parsing to not raise any warnings
- description for memory-region in yaml parsing
- compatible string in yaml parsing for TCM
v6:
- remove coupling of TCM nodes with remoteproc
- remove mailbox as it is optional, not required
v7:
- change lockstep-mode to xlnx,cluster-mode
v9:
- show example IPC nodes and tcm bank nodes
v11:
- add property meta-memory-regions to illustrate link
  between r5 and TCM banks
- update so no warnings from 'make dt_binding_check'
v14:
- concerns were raised about the new property meta-memory-regions.
  There is no clear direction, so for the moment I kept it in the series
- place IPC nodes in RAM in the reserved memory section
v15:
- change lockstep-mode prop as follows: if present, the RPU cluster is in
  lockstep mode; if not present, the cluster is in split mode.
v17:
- remove compatible string from tcm bank nodes
- fix style for bindings
- add boolean type to lockstep mode in binding
- add/update descriptions memory-region, meta-memory-regions,
  pnode-id, mbox* properties
v18: 
- update example remoteproc zynqmp r5 compat string, remove version
  number
---
 .../xilinx,zynqmp-r5-remoteproc.yaml          | 142 ++++++++++++++++++
 1 file changed, 142 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/remoteproc/xilinx,zynqmp-r5-remoteproc.yaml

Comments

Linus Walleij Oct. 8, 2020, 12:37 p.m. UTC | #1
Hi Ben,

thanks for your patch! I noticed this today and took an interest
because in the past I worked on implementing the support for
TCM memory on ARM32.

On Mon, Oct 5, 2020 at 6:06 PM Ben Levinsky <ben.levinsky@xilinx.com> wrote:

> Add binding for ZynqMP R5 OpenAMP.
>
> Represent the RPU domain resources in one device node. Each RPU
> processor is a subnode of the top RPU domain node.
>
> Signed-off-by: Jason Wu <j.wu@xilinx.com>
> Signed-off-by: Wendy Liang <jliang@xilinx.com>
> Signed-off-by: Michal Simek <michal.simek@xilinx.com>
> Signed-off-by: Ben Levinsky <ben.levinsky@xilinx.com>
(...)

> +title: Xilinx R5 remote processor controller bindings
> +
> +description:
> +  This document defines the binding for the remoteproc component that loads and
> +  boots firmwares on the Xilinx Zynqmp and Versal family chipset.

... firmwares for the on-board Cortex R5 of the Zynqmp .. (etc)

> +
> +  Note that the Linux has global addressing view of the R5-related memory (TCM)
> +  so the absolute address ranges are provided in TCM reg's.

Please do not refer to Linux in bindings, they are also for other
operating systems.

Isn't that spelled out "Tightly Coupled Memory" (please expand the acronym).

I had a hard time parsing this description; do you mean:

"The Tightly Coupled Memory (an on-chip SRAM) used by the Cortex R5
is double-ported and visible in both the physical memory space of the
Cortex R5 and the memory space of the main ZynqMP processor
cluster. This is visible in the address space of the ZynqMP processor
at the address indicated here."

That would make sense, but please confirm/update.

> +  memory-region:
> +    description:
> +      collection of memory carveouts used for elf-loading and inter-processor
> +      communication. Each carveout in this case should be in DDR, not in
> +      chip-specific memory such as TCM, OCM, or BRAM.
> +    $ref: /schemas/types.yaml#/definitions/phandle-array

This is nice, you're reusing the infrastructure we already
have for these carveouts, good design!

> +  meta-memory-regions:
> +    description:
> +      collection of memories that are not present in the top level memory
> +      nodes' mapping. For example, R5s' TCM banks. These banks are needed
> +      for R5 firmware meta data such as the R5 firmware's heap and stack.
> +      To be more precise, this is on-chip reserved SRAM regions, e.g. TCM,
> +      BRAM, OCM, etc.
> +    $ref: /schemas/types.yaml#/definitions/phandle-array

Is this in the memory space of the main CPU cluster?

It sure looks like that.

> +     /*
> +      * Below nodes are required if using TCM to load R5 firmware
> +      * if not, then either do not provide the nodes or label them as
> +      * disabled in the status property
> +      */
> +     tcm0a: tcm_0a@ffe00000 {
> +         reg = <0xffe00000 0x10000>;
> +         pnode-id = <0xf>;
> +         no-map;
> +         status = "okay";
> +         phandle = <0x40>;
> +     };
> +     tcm0b: tcm_0b@ffe20000 {
> +         reg = <0xffe20000 0x10000>;
> +         pnode-id = <0x10>;
> +         no-map;
> +         status = "okay";
> +         phandle = <0x41>;
> +     };

All right so this looks suspicious to me. Please explain
what we are seeing in those reg entries? Is this the address
seen by the main CPU cluster?

Does it mean that the main CPU sees the memory of the
R5 as "some kind of TCM" and that TCM is physically
mapped at 0xffe00000 (ITCM) and 0xffe20000 (DTCM)?

If the first is ITCM and the second DTCM that is pretty
important to point out, since this reflects the Harvard
architecture properties of these two memory areas.

The phandle = thing I do not understand at all, but
maybe there is generic documentation for it that
I've missed?

Last time I checked (which was on the ARM32) the physical
address of the ITCM and DTCM could be changed at
runtime with CP15 instructions. I might be wrong
about this, but if that (or something similar) is still the case
you can't just hardcode these addresses here, since the
CPU can move that physical address somewhere else.
See the code in
arch/arm/kernel/tcm.c

It appears the ARM64 Linux kernel does not have any
TCM handling today, but that could change.

So is this just regular ARM TCM memory (as seen by
the main ARM64 cluster)?

If this is the case, you should probably add back the
compatible string, add a separate device tree binding
for TCM memories along the lines of
compatible = "arm,itcm";
compatible = "arm,dtcm";
The reg address should then ideally be interpreted by
the ARM64 kernel and assigned to the I/DTCM.
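
A rough sketch of what I have in mind (purely illustrative; these
compatibles are a suggestion, not an existing merged binding, and the
addresses are just the ones from your example):

    itcm@ffe00000 {
            compatible = "arm,itcm";
            reg = <0xffe00000 0x10000>;
            no-map;
    };
    dtcm@ffe20000 {
            compatible = "arm,dtcm";
            reg = <0xffe20000 0x10000>;
            no-map;
    };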

I'm paging Catalin on this because I do not know if
ARM64 really has [I|D]TCM or if this is some invention
of Xilinx's.

Yours,
Linus Walleij
Ben Levinsky Oct. 8, 2020, 2:21 p.m. UTC | #2
Hi Linus, 

Thanks for the review

Please see my comments inline

> 
> Hi Ben,
> 
> thanks for your patch! I noticed this today and took an interest
> because in the past I worked on implementing the support for
> TCM memory on ARM32.
> 
> On Mon, Oct 5, 2020 at 6:06 PM Ben Levinsky <ben.levinsky@xilinx.com>
> wrote:
> 
> > Add binding for ZynqMP R5 OpenAMP.
> >
> > Represent the RPU domain resources in one device node. Each RPU
> > processor is a subnode of the top RPU domain node.
> >
> > Signed-off-by: Jason Wu <j.wu@xilinx.com>
> > Signed-off-by: Wendy Liang <jliang@xilinx.com>
> > Signed-off-by: Michal Simek <michal.simek@xilinx.com>
> > Signed-off-by: Ben Levinsky <ben.levinsky@xilinx.com>
> (...)
> 
> > +title: Xilinx R5 remote processor controller bindings
> > +
> > +description:
> > +  This document defines the binding for the remoteproc component that loads and
> > +  boots firmwares on the Xilinx Zynqmp and Versal family chipset.
> 
> ... firmwares for the on-board Cortex R5 of the Zynqmp .. (etc)
> 
Will fix
> > +
> > +  Note that the Linux has global addressing view of the R5-related memory (TCM)
> > +  so the absolute address ranges are provided in TCM reg's.
> 
> Please do not refer to Linux in bindings, they are also for other
> operating systems.
> 
> Isn't that spelled out "Tightly Coupled Memory" (please expand the acronym).
> 
> I had a hard time parsing this description; do you mean:
> 
> "The Tightly Coupled Memory (an on-chip SRAM) used by the Cortex R5
> is double-ported and visible in both the physical memory space of the
> Cortex R5 and the memory space of the main ZynqMP processor
> cluster. This is visible in the address space of the ZynqMP processor
> at the address indicated here."
> 
> That would make sense, but please confirm/update.
> 

Yes, the TCM address space noted in the TCM device tree nodes (e.g. 0xffe00000) is visible at that address on the A53 cluster. Will fix

> > +  memory-region:
> > +    description:
> > +      collection of memory carveouts used for elf-loading and inter-processor
> > +      communication. Each carveout in this case should be in DDR, not in
> > +      chip-specific memory such as TCM, OCM, or BRAM.
> > +    $ref: /schemas/types.yaml#/definitions/phandle-array
> 
> This is nice, you're reusing the infrastructure we already
> have for these carveouts, good design!
> 
> > +  meta-memory-regions:
> > +    description:
> > +      collection of memories that are not present in the top level memory
> > +      nodes' mapping. For example, R5s' TCM banks. These banks are needed
> > +      for R5 firmware meta data such as the R5 firmware's heap and stack.
> > +      To be more precise, this is on-chip reserved SRAM regions, e.g. TCM,
> > +      BRAM, OCM, etc.
> > +    $ref: /schemas/types.yaml#/definitions/phandle-array
> 
> Is this in the memory space of the main CPU cluster?
> 
> It sure looks like that.
> 

Yes this is in the memory space of the A53 cluster

Will fix to comment as such. Thank you

> > +     /*
> > +      * Below nodes are required if using TCM to load R5 firmware
> > +      * if not, then either do not provide the nodes or label them as
> > +      * disabled in the status property
> > +      */
> > +     tcm0a: tcm_0a@ffe00000 {
> > +         reg = <0xffe00000 0x10000>;
> > +         pnode-id = <0xf>;
> > +         no-map;
> > +         status = "okay";
> > +         phandle = <0x40>;
> > +     };
> > +     tcm0b: tcm_0b@ffe20000 {
> > +         reg = <0xffe20000 0x10000>;
> > +         pnode-id = <0x10>;
> > +         no-map;
> > +         status = "okay";
> > +         phandle = <0x41>;
> > +     };
> 
> All right so this looks suspicious to me. Please explain
> what we are seeing in those reg entries? Is this the address
> seen by the main CPU cluster?
> 
> Does it mean that the main CPU sees the memory of the
> R5 as "some kind of TCM" and that TCM is physically
> mapped at 0xffe00000 (ITCM) and 0xffe20000 (DTCM)?
> 
> If the first is ITCM and the second DTCM that is pretty
> important to point out, since this reflects the Harvard
> architecture properties of these two memory areas.
> 
> The phandle = thing I do not understand at all, but
> maybe there is generic documentation for it that
> I've missed?
> 
> Last time I checked (which was on the ARM32) the physical
> address of the ITCM and DTCM could be changed at
> runtime with CP15 instructions. I might be wrong
> about this, but if that (or something similar) is still the case
> you can't just hardcode these addresses here, since the
> CPU can move that physical address somewhere else.
> See the code in
> arch/arm/kernel/tcm.c
> 
> It appears the ARM64 Linux kernel does not have any
> TCM handling today, but that could change.
> 
> So is this just regular ARM TCM memory (as seen by
> the main ARM64 cluster)?
> 
> If this is the case, you should probably add back the
> compatible string, add a separate device tree binding
> for TCM memories along the lines of
> compatible = "arm,itcm";
> compatible = "arm,dtcm";
> The reg address should then ideally be interpreted by
> the ARM64 kernel and assigned to the I/DTCM.
> 
> I'm paging Catalin on this because I do not know if
> ARM64 really has [I|D]TCM or if this is some invention
> of Xilinx's.
> 
> Yours,
> Linus Walleij

As you said, this is just regular ARM TCM memory (as seen by the main ARM64 cluster). Yes, I can add back the compatible string, though maybe just "tcm" or "xlnx,tcm"; note there is also a TCM compat string in the TI remoteproc binding listed below

I think I answered this above, but to be explicit: on Zynq UltraScale+ the APU (A53 cluster) sees TCM banks 0A and 0B at 0xffe00000 and 0xffe20000 respectively, and TCM banks 1A and 1B at 0xffe90000 and 0xffeb0000. This is also similar to the TI K3 R5 remoteproc driver binding, for reference:
- https://patchwork.kernel.org/cover/11763783/ 
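
For illustration only, the four banks as seen from the APU could be described like this (a sketch using the 64 KB bank size from the example; the labels and node names here are just illustrative, not the exact production device tree):

     tcm0a: tcm_0a@ffe00000 {
          reg = <0xffe00000 0x10000>; /* R5 0, TCM bank A */
     };
     tcm0b: tcm_0b@ffe20000 {
          reg = <0xffe20000 0x10000>; /* R5 0, TCM bank B */
     };
     tcm1a: tcm_1a@ffe90000 {
          reg = <0xffe90000 0x10000>; /* R5 1, TCM bank A */
     };
     tcm1b: tcm_1b@ffeb0000 {
          reg = <0xffeb0000 0x10000>; /* R5 1, TCM bank B */
     };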

The phandle array here serves as a link to these nodes so that later on, in the remoteproc R5 driver, they can be referenced to get information via the pnode-id property.

The reason we have the compatible, reg, etc. is for the System DT effort (+ @Stefano Stabellini)

That being said, we can change this around to couple the TCM bank nodes into the R5 as we have in our present, internal implementation at
- https://github.com/Xilinx/linux-xlnx/blob/master/Documentation/devicetree/bindings/remoteproc/xilinx%2Czynqmp-r5-remoteproc.txt 
- https://github.com/Xilinx/linux-xlnx/blob/master/drivers/remoteproc/zynqmp_r5_remoteproc.c 
In these, the TCM nodes are coupled into the R5 node, but after previous review on this list they were decoupled from it.

I am not sure what you mean regarding the Arm64 handling of TCM memory. From the architecture of the SoC https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-ultrascale-trm.pdf I know that the A53 cluster can see the absolute addresses of the R5 cluster, so the translation is *I think* done as you describe with the CP15 instructions you listed.
Ben Levinsky Oct. 8, 2020, 4:45 p.m. UTC | #3
Hi, sorry, there were some typos and errors in my response, so I'll correct them below:

> -----Original Message-----
> From: Ben Levinsky <BLEVINSK@xilinx.com>
> Sent: Thursday, October 8, 2020 7:21 AM
> To: Linus Walleij <linus.walleij@linaro.org>; Catalin Marinas
> <catalin.marinas@arm.com>; Stefano Stabellini <stefanos@xilinx.com>; Ed T.
> Mooring <emooring@xilinx.com>
> Cc: Ed T. Mooring <emooring@xilinx.com>; sunnyliangjy@gmail.com; Punit
> Agrawal <punit1.agrawal@toshiba.co.jp>; Michal Simek
> <michals@xilinx.com>; michael.auchter@ni.com; open list:OPEN FIRMWARE
> AND FLATTENED DEVICE TREE BINDINGS <devicetree@vger.kernel.org>;
> Mathieu Poirier <mathieu.poirier@linaro.org>; linux-
> remoteproc@vger.kernel.org; linux-kernel@vger.kernel.org; Rob Herring
> <robh+dt@kernel.org>; Linux ARM <linux-arm-kernel@lists.infradead.org>
> Subject: RE: [PATCH v18 4/5] dt-bindings: remoteproc: Add documentation for
> ZynqMP R5 rproc bindings
> 
> Hi Linus,
> 
> Thanks for the review
> 
> Please see my comments inline
> 
> >
> > Hi Ben,
> >
> > thanks for your patch! I noticed this today and took an interest
> > because in the past I worked on implementing the support for
> > TCM memory on ARM32.
> >
> > On Mon, Oct 5, 2020 at 6:06 PM Ben Levinsky <ben.levinsky@xilinx.com>
> > wrote:
> >
> > > Add binding for ZynqMP R5 OpenAMP.
> > >
> > > Represent the RPU domain resources in one device node. Each RPU
> > > processor is a subnode of the top RPU domain node.
> > >
> > > Signed-off-by: Jason Wu <j.wu@xilinx.com>
> > > Signed-off-by: Wendy Liang <jliang@xilinx.com>
> > > Signed-off-by: Michal Simek <michal.simek@xilinx.com>
> > > Signed-off-by: Ben Levinsky <ben.levinsky@xilinx.com>
> > (...)
> >
> > > +title: Xilinx R5 remote processor controller bindings
> > > +
> > > +description:
> > > +  This document defines the binding for the remoteproc component that loads and
> > > +  boots firmwares on the Xilinx Zynqmp and Versal family chipset.
> >
> > ... firmwares for the on-board Cortex R5 of the Zynqmp .. (etc)
> >
> Will fix
> > > +
> > > +  Note that the Linux has global addressing view of the R5-related memory (TCM)
> > > +  so the absolute address ranges are provided in TCM reg's.
> >
> > Please do not refer to Linux in bindings, they are also for other
> > operating systems.
> >
> > Isn't that spelled out "Tightly Coupled Memory" (please expand the acronym).
> >
> > I had a hard time parsing this description; do you mean:
> >
> > "The Tightly Coupled Memory (an on-chip SRAM) used by the Cortex R5
> > is double-ported and visible in both the physical memory space of the
> > Cortex R5 and the memory space of the main ZynqMP processor
> > cluster. This is visible in the address space of the ZynqMP processor
> > at the address indicated here."
> >
> > That would make sense, but please confirm/update.
> >
> 
> Yes, the TCM address space noted in the TCM device tree nodes (e.g.
> 0xffe00000) is visible at that address on the A53 cluster. Will fix
> 
> > > +  memory-region:
> > > +    description:
> > > +      collection of memory carveouts used for elf-loading and inter-processor
> > > +      communication. Each carveout in this case should be in DDR, not in
> > > +      chip-specific memory such as TCM, OCM, or BRAM.
> > > +    $ref: /schemas/types.yaml#/definitions/phandle-array
> >
> > This is nice, you're reusing the infrastructure we already
> > have for these carveouts, good design!
> >
> > > +  meta-memory-regions:
> > > +    description:
> > > +      collection of memories that are not present in the top level memory
> > > +      nodes' mapping. For example, R5s' TCM banks. These banks are needed
> > > +      for R5 firmware meta data such as the R5 firmware's heap and stack.
> > > +      To be more precise, this is on-chip reserved SRAM regions, e.g. TCM,
> > > +      BRAM, OCM, etc.
> > > +    $ref: /schemas/types.yaml#/definitions/phandle-array
> >
> > Is this in the memory space of the main CPU cluster?
> >
> > It sure looks like that.
> >
> 
> Yes this is in the memory space of the A53 cluster
> 
> Will fix to comment as such. Thank you
> 
> > > +     /*
> > > +      * Below nodes are required if using TCM to load R5 firmware
> > > +      * if not, then either do not provide the nodes or label them as
> > > +      * disabled in the status property
> > > +      */
> > > +     tcm0a: tcm_0a@ffe00000 {
> > > +         reg = <0xffe00000 0x10000>;
> > > +         pnode-id = <0xf>;
> > > +         no-map;
> > > +         status = "okay";
> > > +         phandle = <0x40>;
> > > +     };
> > > +     tcm0b: tcm_0b@ffe20000 {
> > > +         reg = <0xffe20000 0x10000>;
> > > +         pnode-id = <0x10>;
> > > +         no-map;
> > > +         status = "okay";
> > > +         phandle = <0x41>;
> > > +     };
> >
> > All right so this looks suspicious to me. Please explain
> > what we are seeing in those reg entries? Is this the address
> > seen by the main CPU cluster?
> >
> > Does it mean that the main CPU sees the memory of the
> > R5 as "some kind of TCM" and that TCM is physically
> > mapped at 0xffe00000 (ITCM) and 0xffe20000 (DTCM)?
> >
> > If the first is ITCM and the second DTCM that is pretty
> > important to point out, since this reflects the Harvard
> > architecture properties of these two memory areas.
> >
> > The phandle = thing I do not understand at all, but
> > maybe there is generic documentation for it that
> > I've missed?
> >
> > Last time I checked (which was on the ARM32) the physical
> > address of the ITCM and DTCM could be changed at
> > runtime with CP15 instructions. I might be wrong
> > about this, but if that (or something similar) is still the case
> > you can't just hardcode these addresses here, since the
> > CPU can move that physical address somewhere else.
> > See the code in
> > arch/arm/kernel/tcm.c
> >
> > It appears the ARM64 Linux kernel does not have any
> > TCM handling today, but that could change.
> >
> > So is this just regular ARM TCM memory (as seen by
> > the main ARM64 cluster)?
> >
> > If this is the case, you should probably add back the
> > compatible string, add a separate device tree binding
> > for TCM memories along the lines of
> > compatible = "arm,itcm";
> > compatible = "arm,dtcm";
> > The reg address should then ideally be interpreted by
> > the ARM64 kernel and assigned to the I/DTCM.
> >
> > I'm paging Catalin on this because I do not know if
> > ARM64 really has [I|D]TCM or if this is some invention
> > of Xilinx's.
> >
> > Yours,
> > Linus Walleij
> 
> As you said, this is just regular ARM TCM memory (as seen by the main
> ARM64 cluster). Yes, I can add back the compatible string, though maybe
> just "tcm" or "xlnx,tcm"; note there is also a TCM compat string in the
> TI remoteproc binding listed below
> 
> I think I answered this above, but to be explicit: on Zynq UltraScale+
> the APU (A53 cluster) sees TCM banks 0A and 0B at 0xffe00000 and
> 0xffe20000 respectively, and TCM banks 1A and 1B at 0xffe90000 and
> 0xffeb0000. This is also similar to the TI K3 R5 remoteproc driver
> binding, for reference:
> - https://patchwork.kernel.org/cover/11763783/
> 
> The phandle array here serves as a link to these nodes so that later on,
> in the remoteproc R5 driver, they can be referenced to get information
> via the pnode-id property.
> 
> The reason we have the compatible, reg, etc. is for the System DT effort
> (+ @Stefano Stabellini)
> 
> That being said, we can change this around to couple the TCM bank nodes
> into the R5 as we have in our present, internal implementation at
> - https://github.com/Xilinx/linux-
> xlnx/blob/master/Documentation/devicetree/bindings/remoteproc/xilinx%2C
> zynqmp-r5-remoteproc.txt
> - https://github.com/Xilinx/linux-
> xlnx/blob/master/drivers/remoteproc/zynqmp_r5_remoteproc.c
> In these, the TCM nodes are coupled into the R5 node, but after previous
> review on this list they were decoupled from it.
> 
> I am not sure what you mean regarding the Arm64 handling of TCM memory.
> From the architecture of the SoC
> https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-
> ultrascale-trm.pdf I know that the A53 cluster can see the absolute addresses
> of the R5 cluster, so the translation is *I think* done as you describe with
> the CP15 instructions you listed.
> 

Sorry, the above is unclear! Editing this for clarity:
The phandle in the TCM node is referenced in the "meta-memory-regions" property. It is needed because TCM, BRAM, OCM, or other SoC-specific on-chip memories may be used for firmware loading in addition to generic DDR.
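
A trimmed sketch of that linkage, with values taken from the example in this patch:

     tcm0a: tcm_0a@ffe00000 {
          reg = <0xffe00000 0x10000>;
          pnode-id = <0xf>;   /* read by the R5 driver after following the phandle */
          phandle = <0x40>;
     };

     r5_0 {
          /* the driver follows this phandle to reach reg and pnode-id */
          meta-memory-regions = <&tcm0a>;
     };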

The TCM nodes originally had a compatible string, but it was removed after the device tree maintainer Rob H. commented that TCM is not specific to Xilinx. With that in mind, I am not sure if "arm,itcm" or "arm,dtcm" are appropriate.

The TCM banks are addressable at the values provided in reg via a physical bus, not via an instruction, AFAIK, so the CP15 instructions are not used.

Apologies again for the previously ambiguous response; I hope this addresses your concerns,
Ben
Stefano Stabellini Oct. 8, 2020, 8:22 p.m. UTC | #4
On Thu, 8 Oct 2020, Ben Levinsky wrote:
> > Does it mean that the main CPU see the memory of the
> > R5 as "some kind of TCM" and that TCM is physically
> > mapped at 0xffe00000 (ITCM) and 0xffe20000 (DTCM)?
> > 
> > If the first is ITCM and the second DTCM that is pretty
> > important to point out, since this reflects the harvard
> > architecture properties of these two memory areas.

Hi Linus,

I don't think Xilinx TCMs are split in ITCM and DTCM in the way you
describe here: https://www.kernel.org/doc/Documentation/arm/tcm.txt.
Either TCM could be used for anything. See
https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-ultrascale-trm.pdf
at page 82.
Linus Walleij Oct. 8, 2020, 8:54 p.m. UTC | #5
On Thu, Oct 8, 2020 at 4:21 PM Ben Levinsky <BLEVINSK@xilinx.com> wrote:

> As you said, this is just regular ARM TCM memory (as seen by the main ARM64 cluster).
> Yes, I can add back the compatible string, though maybe just "tcm" or "xlnx,tcm"

I mean that if it is an ARM standard feature it should be prefixed "arm,"

> That being said, we can change this around to couple the TCM bank nodes into the R5 as we have in our present, internal implementation at
> - https://github.com/Xilinx/linux-xlnx/blob/master/Documentation/devicetree/bindings/remoteproc/xilinx%2Czynqmp-r5-remoteproc.txt
> - https://github.com/Xilinx/linux-xlnx/blob/master/drivers/remoteproc/zynqmp_r5_remoteproc.c
> In these, the TCM nodes are coupled into the R5 node, but after previous review on this list they were decoupled from it.
>
> I am not sure what you mean regarding the Arm64 handling of TCM memory. From the architecture of the SoC https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-ultrascale-trm.pdf I know that the A53 cluster can see the absolute addresses of the R5 cluster, so the translation is *I think* done as you describe with the CP15 instructions you listed.

It seems like the TCM memories are von Neumann type (either can contain
both code and data), which removes one of my worries. According to this
data sheet they are also not ARM-provided but a Xilinx invention, and
just happen to be hardcoded at the same address as TCM memories in ARM32,
what a coincidence.

So I take it this is the correct view and you can proceed like this,
sorry for the fuss.

Yours,
Linus Walleij

Patch

diff --git a/Documentation/devicetree/bindings/remoteproc/xilinx,zynqmp-r5-remoteproc.yaml b/Documentation/devicetree/bindings/remoteproc/xilinx,zynqmp-r5-remoteproc.yaml
new file mode 100644
index 000000000000..c202dca3b6d0
--- /dev/null
+++ b/Documentation/devicetree/bindings/remoteproc/xilinx,zynqmp-r5-remoteproc.yaml
@@ -0,0 +1,142 @@ 
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/remoteproc/xilinx,zynqmp-r5-remoteproc.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Xilinx R5 remote processor controller bindings
+
+description:
+  This document defines the binding for the remoteproc component that loads and
+  boots firmwares on the Xilinx Zynqmp and Versal family chipset.
+
+  Note that the Linux has global addressing view of the R5-related memory (TCM)
+  so the absolute address ranges are provided in TCM reg's.
+
+maintainers:
+  - Ed Mooring <ed.mooring@xilinx.com>
+  - Ben Levinsky <ben.levinsky@xilinx.com>
+
+properties:
+  compatible:
+    const: xlnx,zynqmp-r5-remoteproc
+
+  lockstep-mode:
+    description:
+      If this property is present, then the configuration is lock-step.
+      Otherwise RPU is split.
+    type: boolean
+    maxItems: 1
+
+  interrupts:
+    description:
+      Interrupt mapping for remoteproc IPI. It is required if the
+      user uses the remoteproc driver with the RPMsg kernel driver.
+    maxItems: 6
+
+  memory-region:
+    description:
+      collection of memory carveouts used for elf-loading and inter-processor
+      communication. Each carveout in this case should be in DDR, not in
+      chip-specific memory such as TCM, OCM, or BRAM.
+    $ref: /schemas/types.yaml#/definitions/phandle-array
+
+  meta-memory-regions:
+    description:
+      collection of memories that are not present in the top level memory
+      nodes' mapping. For example, R5s' TCM banks. These banks are needed
+      for R5 firmware meta data such as the R5 firmware's heap and stack.
+      To be more precise, this is on-chip reserved SRAM regions, e.g. TCM,
+      BRAM, OCM, etc.
+    $ref: /schemas/types.yaml#/definitions/phandle-array
+
+  pnode-id:
+    maxItems: 1
+    description:
+      power node id that is used to uniquely identify the node for Xilinx
+      Power Management. The value is then passed to Xilinx platform
+      manager for power on/off and access.
+    $ref: /schemas/types.yaml#/definitions/uint32
+
+  mboxes:
+    description:
+      array of phandles that describe the rx and tx for xilinx zynqmp
+      mailbox driver. order of rx and tx is described by the mbox-names
+      property. This will be used for communication with remote
+      processor.
+    maxItems: 2
+
+  mbox-names:
+    description:
+      array of strings that denote which item in the mboxes property array
+      are the rx and tx for xilinx zynqmp mailbox driver
+    maxItems: 2
+    $ref: /schemas/types.yaml#/definitions/string-array
+
+
+examples:
+  - |
+     reserved-memory {
+          #address-cells = <1>;
+          #size-cells = <1>;
+          ranges;
+          elf_load: rproc@3ed00000 {
+               no-map;
+               reg = <0x3ed00000 0x40000>;
+          };
+
+          rpu0vdev0vring0: rpu0vdev0vring0@3ed40000 {
+               no-map;
+               reg = <0x3ed40000 0x4000>;
+          };
+          rpu0vdev0vring1: rpu0vdev0vring1@3ed44000 {
+               no-map;
+               reg = <0x3ed44000 0x4000>;
+          };
+          rpu0vdev0buffer: rpu0vdev0buffer@3ed48000 {
+               no-map;
+               reg = <0x3ed48000 0x100000>;
+          };
+
+     };
+
+     /*
+      * Below nodes are required if using TCM to load R5 firmware
+      * if not, then either do not provide the nodes or label them as
+      * disabled in the status property
+      */
+     tcm0a: tcm_0a@ffe00000 {
+         reg = <0xffe00000 0x10000>;
+         pnode-id = <0xf>;
+         no-map;
+         status = "okay";
+         phandle = <0x40>;
+     };
+     tcm0b: tcm_0b@ffe20000 {
+         reg = <0xffe20000 0x10000>;
+         pnode-id = <0x10>;
+         no-map;
+         status = "okay";
+         phandle = <0x41>;
+     };
+
+     rpu {
+          compatible = "xlnx,zynqmp-r5-remoteproc";
+          #address-cells = <1>;
+          #size-cells = <1>;
+          ranges;
+          lockstep-mode;
+          r5_0 {
+               ranges;
+               #address-cells = <1>;
+               #size-cells = <1>;
+               memory-region = <&elf_load>,
+                               <&rpu0vdev0vring0>,
+                               <&rpu0vdev0vring1>,
+                               <&rpu0vdev0buffer>;
+               meta-memory-regions = <&tcm0a>, <&tcm0b>;
+               pnode-id = <0x7>;
+          };
+     };
+
+...
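
For completeness, below is a hypothetical fragment showing how the optional mboxes and mbox-names properties of this binding might be wired up. It is not part of the patch; the ipi_mailbox_rpu0 channel phandle and the cell values are assumptions modeled on the xlnx,zynqmp-ipi-mailbox binding:

     r5_0 {
          /* assumed xlnx,zynqmp-ipi-mailbox channel; tx channel first,
           * then rx, matching the order given in mbox-names */
          mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
          mbox-names = "tx", "rx";
     };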