
[v5,00/21] nvmem: core: introduce NVMEM layouts

Message ID 20221206200740.3567551-1-michael@walle.cc

Message

Michael Walle Dec. 6, 2022, 8:07 p.m. UTC
This is now the third attempt to fetch the MAC addresses from the VPD
for the Kontron sl28 boards. Previous discussions can be found here:
https://lore.kernel.org/lkml/20211228142549.1275412-1-michael@walle.cc/


NVMEM cells are typically added by board code or by the devicetree. But
as the cells get more complex, there is (valid) push back from the
devicetree maintainers against putting that handling in the devicetree.

Therefore, introduce NVMEM layouts. They operate on the NVMEM device and
can add cells at runtime. That way it is possible to add more complex
cells than the offset/length/bits description in the device tree
currently allows. For example, a layout can apply post processing to
individual cells (think of endian swapping, or ethernet offset
handling).
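
To give an idea of what this looks like, below is a rough sketch of a
layout driver, modelled on the sl28vpd layout from this series. The
nvmem_layout structure, nvmem_add_one_cell() and eth_addr_add() are the
names introduced by the patches; the compatible string, cell name and
offset are made up for illustration:

/*
 * Sketch of a layout driver. API names as introduced by this series;
 * "example,vpd-layout" and the cell offset are hypothetical.
 */
#include <linux/etherdevice.h>
#include <linux/nvmem-provider.h>
#include <linux/of.h>

/* Per-cell post processing: derive the index-th MAC from a base address. */
static int example_mac_pp(void *priv, const char *id, int index,
			  unsigned int offset, void *buf, size_t bytes)
{
	if (bytes != ETH_ALEN)
		return -EINVAL;

	if (index < 0)
		return 0;

	if (!is_valid_ether_addr(buf))
		return -EINVAL;

	eth_addr_add(buf, index);	/* helper added in patch 1/21 */

	return 0;
}

static int example_add_cells(struct device *dev, struct nvmem_device *nvmem,
			     struct nvmem_layout *layout)
{
	struct nvmem_cell_info info = {
		.name = "base-mac-address",
		.offset = 0x10,		/* hypothetical location */
		.bytes = ETH_ALEN,
		.read_post_process = example_mac_pp,
	};

	return nvmem_add_one_cell(nvmem, &info);
}

static const struct of_device_id example_of_match_table[] = {
	{ .compatible = "example,vpd-layout" },
	{ /* sentinel */ },
};

static struct nvmem_layout example_layout = {
	.name = "example-vpd-layout",
	.of_match_table = example_of_match_table,
	.add_cells = example_add_cells,
};

static int __init example_layout_init(void)
{
	return nvmem_layout_register(&example_layout);
}
subsys_initcall(example_layout_init);

An NVMEM device whose node matches the layout's of_match_table gets its
cells created at registration time, without any cell description in the
devicetree.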

The imx-ocotp driver is the only user of the global post processing
hook, so convert it to NVMEM layouts and drop the global post processing
hook.

For now, the layouts are selected by the device tree. But the idea is
that board files or other drivers could also set a layout, although no
code for that exists yet.

Thanks to Miquel, the device tree bindings are already approved and merged.

NVMEM layouts as modules?
While possible in principle, it doesn't make sense as long as the NVMEM
core itself can't be compiled as a module: the layouts need to be
available at probe time. (That is also the reason why they are
registered with subsys_initcall().) If the NVMEM core could be built as
a module, the layouts could be modules, too.

Michael Walle (19):
  net: add helper eth_addr_add()
  of: base: add of_parse_phandle_with_optional_args()
  of: property: make #.*-cells optional for simple props
  of: property: add #nvmem-cell-cells property
  nvmem: core: fix device node refcounting
  nvmem: core: add an index parameter to the cell
  nvmem: core: move struct nvmem_cell_info to nvmem-provider.h
  nvmem: core: drop the removal of the cells in nvmem_add_cells()
  nvmem: core: fix cell removal on error
  nvmem: core: add nvmem_add_one_cell()
  nvmem: core: use nvmem_add_one_cell() in nvmem_add_cells_from_of()
  nvmem: core: introduce NVMEM layouts
  nvmem: core: add per-cell post processing
  nvmem: core: allow to modify a cell before adding it
  nvmem: imx-ocotp: replace global post processing with layouts
  nvmem: cell: drop global cell_post_process
  nvmem: core: provide own priv pointer in post process callback
  nvmem: layouts: add sl28vpd layout
  MAINTAINERS: add myself as sl28vpd nvmem layout driver

Miquel Raynal (2):
  nvmem: layouts: Add ONIE tlv layout driver
  MAINTAINERS: Add myself as ONIE tlv NVMEM layout maintainer

 Documentation/driver-api/nvmem.rst |  15 ++
 MAINTAINERS                        |  12 ++
 drivers/nvmem/Kconfig              |   4 +
 drivers/nvmem/Makefile             |   1 +
 drivers/nvmem/core.c               | 295 +++++++++++++++++++++--------
 drivers/nvmem/imx-ocotp.c          |  34 ++--
 drivers/nvmem/layouts/Kconfig      |  23 +++
 drivers/nvmem/layouts/Makefile     |   7 +
 drivers/nvmem/layouts/onie-tlv.c   | 244 ++++++++++++++++++++++++
 drivers/nvmem/layouts/sl28vpd.c    | 153 +++++++++++++++
 drivers/of/property.c              |   6 +-
 include/linux/etherdevice.h        |  14 ++
 include/linux/nvmem-consumer.h     |  17 +-
 include/linux/nvmem-provider.h     |  95 +++++++++-
 include/linux/of.h                 |  25 +++
 15 files changed, 837 insertions(+), 108 deletions(-)
 create mode 100644 drivers/nvmem/layouts/Kconfig
 create mode 100644 drivers/nvmem/layouts/Makefile
 create mode 100644 drivers/nvmem/layouts/onie-tlv.c
 create mode 100644 drivers/nvmem/layouts/sl28vpd.c

Comments

Miquel Raynal Jan. 3, 2023, 3:39 p.m. UTC | #1
Hi Srinivas,

michael@walle.cc wrote on Tue,  6 Dec 2022 21:07:19 +0100:

> This is now the third attempt to fetch the MAC addresses from the VPD
> for the Kontron sl28 boards. Previous discussions can be found here:
> https://lore.kernel.org/lkml/20211228142549.1275412-1-michael@walle.cc/
> 
> [...]

I believe this series still applies even though -rc1 (and -rc2) are out
now. May we know if you plan to merge it anytime soon, or if there are
still discrepancies in the implementation you would like to discuss?
Otherwise, I would really like to see this sit in -next for a few weeks
before it is sent out to Linus, just in case.

Thanks,
Miquèl
Srinivas Kandagatla Jan. 3, 2023, 3:51 p.m. UTC | #2
Hi Miquel,

On 03/01/2023 15:39, Miquel Raynal wrote:
> Hi Srinivas,
> 
> michael@walle.cc wrote on Tue,  6 Dec 2022 21:07:19 +0100:
> 
>> This is now the third attempt to fetch the MAC addresses from the VPD
>> for the Kontron sl28 boards. [...]
> 
> I believe this series still applies even though -rc1 (and -rc2) are out
> now. May we know if you plan to merge it anytime soon, or if there are
> still discrepancies in the implementation you would like to discuss?
> Otherwise, I would really like to see this sit in -next for a few weeks
> before it is sent out to Linus, just in case.

Thanks for the work!

Let's get some testing in -next.


Applied now,

--srini
> 
> Thanks,
> Miquèl
Miquel Raynal Jan. 3, 2023, 3:58 p.m. UTC | #3
Hi Srinivas,

srinivas.kandagatla@linaro.org wrote on Tue, 3 Jan 2023 15:51:31 +0000:

> Hi Miquel,
> 
> On 03/01/2023 15:39, Miquel Raynal wrote:
> > [...]
> 
> Thanks for the work!
> 
> Let's get some testing in -next.
> 
> 
> Applied now,

Excellent! Thanks a lot for the quick answer and thanks for applying,
let's see how it behaves.

Thanks,
Miquèl
Alexander Stein Jan. 5, 2023, 11:04 a.m. UTC | #4
On Tuesday, January 3, 2023, 16:51:31 CET, Srinivas Kandagatla wrote:
> Hi Miquel,
> 
> On 03/01/2023 15:39, Miquel Raynal wrote:
> > [...]
> 
> Thanks for the work!
> 
> Let's get some testing in -next.

This causes the following errors on existing boards
(imx8mq-tqma8mq-mba8mx.dtb):
root@tqma8-common:~# uname -r
6.2.0-rc2-next-20230105

> OF: /soc@0: could not get #nvmem-cell-cells for /soc@0/bus@30000000/efuse@30350000/soc-uid@4
> OF: /soc@0/bus@30800000/ethernet@30be0000: could not get #nvmem-cell-cells for /soc@0/bus@30000000/efuse@30350000/mac-address@90

These are caused because '#nvmem-cell-cells = <0>;' is not explicitly
set in DT.

> TI DP83867 30be0000.ethernet-1:0e: error -EINVAL: failed to get nvmem cell io_impedance_ctrl
> TI DP83867: probe of 30be0000.ethernet-1:0e failed with error -22

These are caused because of_nvmem_cell_get() now returns -EINVAL instead
of -ENODEV if the requested nvmem cell is not available.

Best regards,
Alexander
Miquel Raynal Jan. 5, 2023, 11:35 a.m. UTC | #5
Hello,

alexander.stein@ew.tq-group.com wrote on Thu, 05 Jan 2023 12:04:52 +0100:

> On Tuesday, January 3, 2023, 16:51:31 CET, Srinivas Kandagatla wrote:
> > Hi Miquel,
> > 
> > On 03/01/2023 15:39, Miquel Raynal wrote:
> > > [...]
> > 
> > Thanks for the work!
> > 
> > Let's get some testing in -next.
> 
> This causes the following errors on existing boards
> (imx8mq-tqma8mq-mba8mx.dtb):
> root@tqma8-common:~# uname -r
> 6.2.0-rc2-next-20230105
> 
> > OF: /soc@0: could not get #nvmem-cell-cells for /soc@0/bus@30000000/efuse@30350000/soc-uid@4
> > OF: /soc@0/bus@30800000/ethernet@30be0000: could not get #nvmem-cell-cells for /soc@0/bus@30000000/efuse@30350000/mac-address@90
> 
> These are caused because '#nvmem-cell-cells = <0>;' is not explicitly
> set in DT.
> 
> > TI DP83867 30be0000.ethernet-1:0e: error -EINVAL: failed to get nvmem cell io_impedance_ctrl
> > TI DP83867: probe of 30be0000.ethernet-1:0e failed with error -22
> 
> These are caused because of_nvmem_cell_get() now returns -EINVAL instead
> of -ENODEV if the requested nvmem cell is not available.

Should we just assume #nvmem-cell-cells = <0> by default? I guess it's
a safe assumption.

Thanks,
Miquèl
Michael Walle Jan. 5, 2023, 12:11 p.m. UTC | #6
Hi Alexander,

thanks for debugging. I'm not yet sure what is going wrong, so
I have some more questions below.

>> This causes the following errors on existing boards
>> (imx8mq-tqma8mq-mba8mx.dtb):
>> root@tqma8-common:~# uname -r
>> 6.2.0-rc2-next-20230105
>> 
>> > OF: /soc@0: could not get #nvmem-cell-cells for /soc@0/bus@30000000/efuse@30350000/soc-uid@4
>> > OF: /soc@0/bus@30800000/ethernet@30be0000: could not get #nvmem-cell-cells for /soc@0/bus@30000000/efuse@30350000/mac-address@90
>> 
>> These are caused because '#nvmem-cell-cells = <0>;' is not explicitly
>> set in DT.
>> 
>> > TI DP83867 30be0000.ethernet-1:0e: error -EINVAL: failed to get nvmem cell io_impedance_ctrl
>> > TI DP83867: probe of 30be0000.ethernet-1:0e failed with error -22
>> 
>> These are caused because of_nvmem_cell_get() now returns -EINVAL instead
>> of -ENODEV if the requested nvmem cell is not available.

What do you mean by not available? Not yet available because of probe
order?

> Should we just assume #nvmem-cell-cells = <0> by default? I guess it's
> a safe assumption.

Actually, that's what patch 2/21 is for.

Alexander, did you verify that the EINVAL is returned by
of_parse_phandle_with_optional_args()?

-michael
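
For reference, the helper added in patch 2/21 behaves like
of_parse_phandle_with_args(), except that a missing #nvmem-cell-cells
property in the target node is treated as a cell count of zero. A
simplified sketch of its semantics (not necessarily the verbatim kernel
implementation):

static inline int of_parse_phandle_with_optional_args(const struct device_node *np,
						      const char *list_name,
						      const char *cells_name,
						      int index,
						      struct of_phandle_args *out_args)
{
	if (index < 0)
		return -EINVAL;

	/* a cell_count of 0 makes the "#...-cells" property optional */
	return __of_parse_phandle_with_args(np, list_name, cells_name,
					    0, index, out_args);
}

Note that a negative index (such as the -EINVAL returned by
of_property_match_string() when nvmem-cell-names is absent) is
propagated as -EINVAL rather than converted to -ENOENT, which is
exactly the regression being debugged below.
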
Alexander Stein Jan. 5, 2023, 1:22 p.m. UTC | #7
Hi Michael,

On Thursday, January 5, 2023, 13:51:53 CET, Michael Walle wrote:
> Hi,
> 
> On 2023-01-05 13:21, Alexander Stein wrote:
> > On Thursday, January 5, 2023, 13:11:37 CET, Michael Walle wrote:
> >> thanks for debugging. I'm not yet sure what is going wrong, so
> >> I have some more questions below.
> >> 
> >> >> [...]
> >> >> These are caused because of_nvmem_cell_get() now returns -EINVAL
> >> >> instead of -ENODEV if the requested nvmem cell is not available.
> >> 
> >> What do you mean by not available? Not yet available because of probe
> >> order?
> > 
> > Ah, I was talking about there being no nvmem cell used in my PHY node,
> > e.g. no 'nvmem-cells' nor 'nvmem-cell-names' (set to 'io_impedance_ctrl').
> > That's why of_property_match_string returns -EINVAL.
> 
> Ahh I see. You mean ENOENT instead of ENODEV, right?

Yeah you are right here, ENOENT is the one missing.

> >> > Should we just assume #nvmem-cell-cells = <0> by default? I guess it's
> >> > a safe assumption.
> >> 
> >> Actually, that's what patch 2/21 is for.
> >> 
> >> Alexander, did you verify that the EINVAL is returned by
> >> of_parse_phandle_with_optional_args()?
> > 
> > Yep.
> > 
> > --8<--
> > diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
> > index 1b61c8bf0de4..f2a85a31d039 100644
> > --- a/drivers/nvmem/core.c
> > +++ b/drivers/nvmem/core.c
> > @@ -1339,9 +1339,11 @@ struct nvmem_cell *of_nvmem_cell_get(struct device_node *np, const char *id)
> > 
> >         if (id)
> >                 index = of_property_match_string(np, "nvmem-cell-names", id);
> > +       pr_info("%s: index: %d\n", __func__, index);
> > 
> >         ret = of_parse_phandle_with_optional_args(np, "nvmem-cells",
> >                                                   "#nvmem-cell-cells",
> >                                                   index, &cell_spec);
> > +       pr_info("%s: of_parse_phandle_with_optional_args: %d\n", __func__, ret);
> >         if (ret)
> >                 return ERR_PTR(ret);
> > --8<--
> > 
> > Results in:
> >> [    1.861896] of_nvmem_cell_get: index: -22
> >> [    1.865934] of_nvmem_cell_get: of_parse_phandle_with_optional_args: -22
> >> [    1.872595] TI DP83867 30be0000.ethernet-1:0e: error -EINVAL: failed to get nvmem cell io_impedance_ctrl
> >> [    2.402575] TI DP83867: probe of 30be0000.ethernet-1:0e failed with error -22
> > 
> > So, the index is wrong in the first place, but this was no problem
> > until now.
> 
> Thanks, could you try the following patch:
> 
> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
> index 1b61c8bf0de4..1085abfcd9b1 100644
> --- a/drivers/nvmem/core.c
> +++ b/drivers/nvmem/core.c
> @@ -1336,8 +1336,11 @@ struct nvmem_cell *of_nvmem_cell_get(struct device_node *np, const char *id)
>          int ret;
> 
>          /* if cell name exists, find index to the name */
> -       if (id)
> +       if (id) {
>                  index = of_property_match_string(np, "nvmem-cell-names", id);
> +               if (index < 0)
> +                       return ERR_PTR(-ENOENT);
> +       }
> 
>          ret = of_parse_phandle_with_optional_args(np, "nvmem-cells",
>                                                    "#nvmem-cell-cells",
> 
> Before patch 6/21, the -EINVAL was passed as index to of_parse_phandle(),
> which then returned NULL, which caused the nvmem core to return -ENOENT.
> I have a vague memory that I made sure
> of_parse_phandle_with_optional_args() would also propagate the wrong
> index to its return code. But now it won't be converted to -ENOENT.

Yes, this does the trick. Thanks

Best regards,
Alexander
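
For context, the consumer pattern that broke here looks roughly like
the following (simplified from the dp83867 probe path, not the verbatim
driver code): an absent optional cell is recognized by the -ENOENT
return value, so returning -EINVAL instead turns "no cell described in
the DT" into a probe failure.

#include <linux/nvmem-consumer.h>
#include <linux/phy.h>

static int example_phy_probe(struct phy_device *phydev)
{
	struct device_node *np = phydev->mdio.dev.of_node;
	struct nvmem_cell *cell;

	cell = of_nvmem_cell_get(np, "io_impedance_ctrl");
	if (IS_ERR(cell)) {
		/* only -ENOENT means "optional cell absent" */
		if (PTR_ERR(cell) != -ENOENT)
			return PTR_ERR(cell);
		/* no cell in the DT: keep the hardware default */
		return 0;
	}

	/* ... read the calibration value from the cell and apply it ... */
	nvmem_cell_put(cell);

	return 0;
}
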
Srinivas Kandagatla Feb. 6, 2023, 8:31 p.m. UTC | #8
Hi Michael/Miquel,

I had to revert the layout patches due to comments from Greg about
making the layouts built-in rather than modules; he is not ready to
merge them as they are.

His original comment:

"Why are we going back to "custom-built" kernel configurations?  Why can
this not be a loadable module?  Distros are now forced to enable these
layout and all kernels will have this dead code in the tree without any
choice in the matter?

That's not ok, these need to be auto-loaded based on the hardware
representation like any other kernel module.  You can't force them to be
always present, sorry.
"

I have applied most of the patches, except:

nvmem: core: introduce NVMEM layouts
nvmem: core: add per-cell post processing
nvmem: core: allow to modify a cell before adding it
nvmem: imx-ocotp: replace global post processing with layouts
nvmem: cell: drop global cell_post_process
nvmem: core: provide own priv pointer in post process callback
nvmem: layouts: add sl28vpd layout
MAINTAINERS: add myself as sl28vpd nvmem layout driver
nvmem: layouts: Add ONIE tlv layout driver
MAINTAINERS: Add myself as ONIE tlv NVMEM layout maintainer
nvmem: core: return -ENOENT if nvmem cell is not found
nvmem: layouts: Fix spelling mistake "platforn" -> "platform"
dt-bindings: nvmem: Fix spelling mistake "platforn" -> "platform"
nvmem: core: fix nvmem_layout_get_match_data()

Please rebase your patches on top of nvmem-next once layouts are 
converted to loadable modules.

thanks,
srini



On 03/01/2023 15:39, Miquel Raynal wrote:
> [...]
Miquel Raynal Feb. 6, 2023, 10:47 p.m. UTC | #9
Hi Srinivas,

+ Greg

srinivas.kandagatla@linaro.org wrote on Mon, 6 Feb 2023 20:31:46 +0000:

> Hi Michael/Miquel,
> 
> I had to revert the layout patches due to comments from Greg about making the layouts built-in rather than modules; he is not ready to merge them as they are.

Ok this is the second time I see something similar happening:
- maintainer or maintainers group doing the review/apply job and
  sending to "upper" maintainer
- upper maintainer refusing for a "questionable" reason at this stage.

I am not saying the review is incorrect or anything. I'm just wondering
whether, for the second time, this is a fair situation, either for
myself as a contributor or for the intermediate maintainer who's being
kind of bypassed.

What I mean is: the review process has happened. Nothing was hidden,
this series started living on the mailing lists more than two
years ago. The contribution process which has been in place for many
years asks the contributors to send new versions when the review
process leads to comments, which we did. Once the series has been
"accepted" it is expected that this series will be pulled during the
next merge window. If there is something else to fix, there are 6 to 8
long weeks where contributors' fixes are welcome. Why not give us the
opportunity to use them? Why, for the second time, am I facing an
extremely urgent situation where I have to cancel all my commitments
just because a random comment has been made on a series which has been
standing still for months?

What I would expect instead is a discussion on the cover letter of the
series, where Michael explained why he did not choose to use modules in
the first place. If it appears that for some reason it is best to
enable NVMEM layouts as modules, we will send a timely series on top
of the current one to enable that particular case.

> >> NVMEM layouts as modules?
> >> While possible in principle, it doesn't make sense as long as the NVMEM
> >> core itself can't be compiled as a module: the layouts need to be
> >> available at probe time. (That is also the reason why they are
> >> registered with subsys_initcall().) If the NVMEM core could be built as
> >> a module, the layouts could be modules, too.

I know Michael is busy after FOSDEM and so am I, so, Greg, would
you accept taking the PR as it is, participating in the discussion, and
waiting for an update?

Thanks,
Miquèl

> His original comment:
> 
> "Why are we going back to "custom-built" kernel configurations?  Why can
> this not be a loadable module?  Distros are now forced to enable these
> layout and all kernels will have this dead code in the tree without any
> choice in the matter?
> 
> That's not ok, these need to be auto-loaded based on the hardware
> representation like any other kernel module.  You can't force them to be
> always present, sorry.
> "
> 
> [...]
Greg Kroah-Hartman Feb. 7, 2023, 6:28 a.m. UTC | #10
On Mon, Feb 06, 2023 at 11:47:13PM +0100, Miquel Raynal wrote:
> Hi Srinivas,
> 
> + Greg
> 
> srinivas.kandagatla@linaro.org wrote on Mon, 6 Feb 2023 20:31:46 +0000:
> 
> > Hi Michael/Miquel,
> > 
> > I had to revert the layout patches due to comments from Greg about making the layouts built-in rather than modules; he is not ready to merge them as they are.
> 
> Ok this is the second time I see something similar happening:
> - maintainer or maintainers group doing the review/apply job and
>   sending to "upper" maintainer
> - upper maintainer refusing for a "questionable" reason at this stage.

Only the second time?  You've gotten lucky then :)

This happens all the time based on experience levels of reviewers and
just the very nature of how this whole process works.  It's nothing
unusual and is good overall for the health of the project.  In other
words, this is a feature, not a bug.

> I am not saying the review is incorrect or anything. I'm just wondering
> whether, for the second time, this is a fair situation, either for
> myself as a contributor or for the intermediate maintainer who's being
> kind of bypassed.
> 
> What I mean is: the review process has happened. Nothing was hidden,
> this series started living on the mailing lists more than two
> years ago. The contribution process which has been in place for many
> years asks the contributors to send new versions when the review
> process leads to comments, which we did. Once the series has been
> "accepted" it is expected that this series will be pulled during the
> next merge window. If there is something else to fix, there are 6 to 8
> long weeks where contributors' fixes are welcome. Why not give us the
> opportunity to use them? Why, for the second time, am I facing an
> extremely urgent situation where I have to cancel all my commitments
> just because a random comment has been made on a series which has been
> standing still for months?

There's no need to cancel anything, there are no deadlines in kernel
development and I am not asking for any sort of rush whatsoever.

So relax, take a week or two off (or month), and come back with an
updated patch series when you are ready.  And feel free to cc: me on it
if you want my reviews (as I objected to these patches as-is) so that we
don't end up in the same situation (where one maintainer accepted
something, but the maintainer they sent it to rejected it.)

Again, there's no rush, and this is totally normal.

> What I would expect instead is a discussion on the cover letter of the
> series, where Michael explained why he did not choose to use modules in
> the first place. If it appears that for some reason it is best to
> enable NVMEM layouts as modules, we will send a timely series on top
> of the current one to enable that particular case.

Why not rework the existing series to handle this and not require
"fixups" at the end of the series?  We don't normally create bugs and
then fix them up in the same patch set, as you know, so this shouldn't
be treated any differently.

> > >> NVMEM layouts as modules?
> > >> While possible in principle, it doesn't make sense as long as the NVMEM
> > >> core itself can't be compiled as a module: the layouts need to be
> > >> available at probe time. (That is also the reason why they are
> > >> registered with subsys_initcall().) If the NVMEM core could be built as
> > >> a module, the layouts could be modules, too.
> 
> I know Michael is busy after FOSDEM and so am I, so, Greg, would
> you accept taking the PR as it is, participating in the discussion, and
> waiting for an update?

Kernel development doesn't work on "PR" :)

And no, I can't take these, as I don't agree with them, and I totally
imagine others will object for the same reason I did (and then they
would object to me, as the patches would be in my tree, as I am then
responsible for them.)

So send an updated version whenever you have the chance.  Again, there's
no rush, deadline, or anything else here.  Code is accepted when it is
ready and correct, not anytime earlier.

thanks,

greg k-h