mbox series

[v4,00/10] Add the PowerQUICC audio support using the QMC

Message ID 20230126083222.374243-1-herve.codina@bootlin.com
Headers show
Series Add the PowerQUICC audio support using the QMC | expand

Message

Herve Codina Jan. 26, 2023, 8:32 a.m. UTC
Hi,

This series adds support for audio using the QMC controller available in
some Freescale PowerQUICC SoCs.

This series contains three parts in order to show the different blocks
hierarchy and their usage in this support.

The first one is related to TSA (Time Slot Assigner).
The TSA handles the data present at the pin level (TDM with up to 64
time slots) and dispatchs them to one or more serial controller (SCC).

The second is related to QMC (QUICC Multichannel Controller).
The QMC handles the data at the serial controller (SCC) level and splits
again the data to creates some virtual channels.

The last one is related to the audio component (QMC audio).
It is the glue between the QMC controller and the ASoC component. It
handles one or more QMC virtual channels and creates one DAI per QMC
virtual channels handled.

Compared to the previous iteration
  https://lore.kernel.org/linux-kernel/20230113103759.327698-1-herve.codina@bootlin.com/
this v4 series mainly:
  - updates code comment format (feedback received on v2 series and
    forgot to update in v3),
  - fixes bindings,
  - Replaces the fsl,tsa phandle and the fsl,tsa-cell-id property by a
    fsl,tsa-serial phandle and update the TSA related API,
  - Adds some missing lock in tsa_serial_connect() and
    tsa_serial_disconnect().

Best regards,
Herve Codina

Changes v3 -> v4
  - patches 2, 6 and 9
    Update code comment format.

  - patch 1
    Fix some description formats.
    Add 'additionalProperties: false' in subnode.
    Move fsl,mode to fsl,diagnostic-mode.
    Change clocks and clock-names properties.
    Add '#serial-cells' property related to the newly introduced
    fsl,tsa-serial phandle.

  - patch 2
    Move fsl,mode to fsl,diagnostic-mode.
    Replace the	fsl,tsa phandle and the	fsl,tsa-cell-id	property by a
    fsl,tsa-serial phandle and update the related API.
    Add missing locks.

  - patch 5
    Fix some description format.
    Replace the fsl,tsa phandle and the fsl,tsa-cell-id property by a
    fsl,tsa-serial phandle.
    Rename fsl,mode to fsl,operational-mode and update its description.

  - patch 6
    Replace the	fsl,tsa phandle and the	fsl,tsa-cell-id	property by a
    fsl,tsa-serial phandle and use the TSA updated API.
    Rename fsl,mode to fsl,operational-mode.

  - patch 8
    Add 'Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>'

Changes v2 -> v3
  - All bindings
    Rename fsl-tsa.h to fsl,tsa.h
    Add missing vendor prefix
    Various fixes (quotes, node names, upper/lower case)

  - patches 1 and 2 (TSA binding specific)
    Remove 'reserved' values in the routing tables
    Remove fsl,grant-mode
    Add a better description for 'fsl,common-rxtx-pins'
    Fix clocks/clocks-name handling against fsl,common-rxtx-pins
    Add information related to the delays unit
    Removed FSL_CPM_TSA_NBCELL
    Fix license in binding header file fsl,tsa.h

  - patches 5 and 6 (QMC binding specific)
    Remove fsl,cpm-command property
    Add interrupt property constraint

  - patches 8 and 9 (QMC audio binding specific)
    Remove 'items' in compatible property definition
    Add missing 'dai-common.yaml' reference
    Fix the qmc_chan phandle definition

  - patch 2 and 6
    Use io{read,write}be{32,16}
    Change commit subjects and logs

  - patch 4
    Add 'Acked-by: Christophe Leroy <christophe.leroy@csgroup.eu>'

Changes v1 -> v2:
  - patch 2 and 6
    Fix kernel test robot errors

  - other patches
    No changes

Herve Codina (10):
  dt-bindings: soc: fsl: cpm_qe: Add TSA controller
  soc: fsl: cpm1: Add support for TSA
  MAINTAINERS: add the Freescale TSA controller entry
  powerpc/8xx: Use a larger CPM1 command check mask
  dt-bindings: soc: fsl: cpm_qe: Add QMC controller
  soc: fsl: cmp1: Add support for QMC
  MAINTAINERS: add the Freescale QMC controller entry
  dt-bindings: sound: Add support for QMC audio
  ASoC: fsl: Add support for QMC audio
  MAINTAINERS: add the Freescale QMC audio entry

 .../bindings/soc/fsl/cpm_qe/fsl,qmc.yaml      |  167 ++
 .../bindings/soc/fsl/cpm_qe/fsl,tsa.yaml      |  261 +++
 .../bindings/sound/fsl,qmc-audio.yaml         |  117 ++
 MAINTAINERS                                   |   25 +
 arch/powerpc/platforms/8xx/cpm1.c             |    2 +-
 drivers/soc/fsl/qe/Kconfig                    |   23 +
 drivers/soc/fsl/qe/Makefile                   |    2 +
 drivers/soc/fsl/qe/qmc.c                      | 1533 +++++++++++++++++
 drivers/soc/fsl/qe/tsa.c                      |  864 ++++++++++
 drivers/soc/fsl/qe/tsa.h                      |   42 +
 include/dt-bindings/soc/fsl,tsa.h             |   13 +
 include/soc/fsl/qe/qmc.h                      |   71 +
 sound/soc/fsl/Kconfig                         |    9 +
 sound/soc/fsl/Makefile                        |    2 +
 sound/soc/fsl/fsl_qmc_audio.c                 |  735 ++++++++
 15 files changed, 3865 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/devicetree/bindings/soc/fsl/cpm_qe/fsl,qmc.yaml
 create mode 100644 Documentation/devicetree/bindings/soc/fsl/cpm_qe/fsl,tsa.yaml
 create mode 100644 Documentation/devicetree/bindings/sound/fsl,qmc-audio.yaml
 create mode 100644 drivers/soc/fsl/qe/qmc.c
 create mode 100644 drivers/soc/fsl/qe/tsa.c
 create mode 100644 drivers/soc/fsl/qe/tsa.h
 create mode 100644 include/dt-bindings/soc/fsl,tsa.h
 create mode 100644 include/soc/fsl/qe/qmc.h
 create mode 100644 sound/soc/fsl/fsl_qmc_audio.c

Comments

Michael Ellerman Jan. 26, 2023, 9:59 a.m. UTC | #1
Herve Codina <herve.codina@bootlin.com> writes:
> The CPM1 command mask is defined for use with the standard
> CPM1 command register as described in the user's manual:
>   0  |1        3|4    7|8   11|12      14| 15|
>   RST|    -     |OPCODE|CH_NUM|     -    |FLG|
>
> In the QMC extension the CPM1 command register is redefined
> (QMC supplement user's manuel) with the following mapping:
>   0  |1        3|4    7|8           13|14| 15|
>   RST|QMC OPCODE|  1110|CHANNEL_NUMBER| -|FLG|
>
> Extend the check command mask in order to support both the
> standard CH_NUM field and the QMC extension CHANNEL_NUMBER
> field.
>
> Signed-off-by: Herve Codina <herve.codina@bootlin.com>
> Acked-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> ---
>  arch/powerpc/platforms/8xx/cpm1.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)

cheers

> diff --git a/arch/powerpc/platforms/8xx/cpm1.c b/arch/powerpc/platforms/8xx/cpm1.c
> index 8ef1f4392086..6b828b9f90d9 100644
> --- a/arch/powerpc/platforms/8xx/cpm1.c
> +++ b/arch/powerpc/platforms/8xx/cpm1.c
> @@ -100,7 +100,7 @@ int cpm_command(u32 command, u8 opcode)
>  	int i, ret;
>  	unsigned long flags;
>  
> -	if (command & 0xffffff0f)
> +	if (command & 0xffffff03)
>  		return -EINVAL;
>  
>  	spin_lock_irqsave(&cmd_lock, flags);
> -- 
> 2.39.0
Christophe Leroy Feb. 15, 2023, 4:08 p.m. UTC | #2
Hi Li and Qiang

Le 26/01/2023 à 09:32, Herve Codina a écrit :
> The QMC (QUICC Multichannel Controller) emulates up to 64
> channels within one serial controller using the same TDM
> physical interface routed from the TSA.
> 
> It is available in some	PowerQUICC SoC such as the
> MPC885 or MPC866.
> 
> It is also available on some Quicc Engine SoCs.
> This current version support CPM1 SoCs only and some
> enhancement are needed to support Quicc Engine SoCs.

Do you have any comment on this patch ?

Otherwise, may I ask you to send your Acked-by: so that the series can 
be merged in a relevant tree, most likely sound tree ?

Thanks
Christophe

> 
> Signed-off-by: Herve Codina <herve.codina@bootlin.com>
> ---
>   drivers/soc/fsl/qe/Kconfig  |   12 +
>   drivers/soc/fsl/qe/Makefile |    1 +
>   drivers/soc/fsl/qe/qmc.c    | 1533 +++++++++++++++++++++++++++++++++++
>   include/soc/fsl/qe/qmc.h    |   71 ++
>   4 files changed, 1617 insertions(+)
>   create mode 100644 drivers/soc/fsl/qe/qmc.c
>   create mode 100644 include/soc/fsl/qe/qmc.h
> 
> diff --git a/drivers/soc/fsl/qe/Kconfig b/drivers/soc/fsl/qe/Kconfig
> index 60ec11c9f4d9..25b218351ae3 100644
> --- a/drivers/soc/fsl/qe/Kconfig
> +++ b/drivers/soc/fsl/qe/Kconfig
> @@ -44,6 +44,18 @@ config CPM_TSA
>   	  This option enables support for this
>   	  controller
>   
> +config CPM_QMC
> +	tristate "CPM QMC support"
> +	depends on OF && HAS_IOMEM
> +	depends on CPM1 || (PPC && COMPILE_TEST)
> +	depends on CPM_TSA
> +	help
> +	  Freescale CPM QUICC Multichannel Controller
> +	  (QMC)
> +
> +	  This option enables support for this
> +	  controller
> +
>   config QE_TDM
>   	bool
>   	default y if FSL_UCC_HDLC
> diff --git a/drivers/soc/fsl/qe/Makefile b/drivers/soc/fsl/qe/Makefile
> index 45c961acc81b..ec8506e13113 100644
> --- a/drivers/soc/fsl/qe/Makefile
> +++ b/drivers/soc/fsl/qe/Makefile
> @@ -5,6 +5,7 @@
>   obj-$(CONFIG_QUICC_ENGINE)+= qe.o qe_common.o qe_ic.o qe_io.o
>   obj-$(CONFIG_CPM)	+= qe_common.o
>   obj-$(CONFIG_CPM_TSA)	+= tsa.o
> +obj-$(CONFIG_CPM_QMC)	+= qmc.o
>   obj-$(CONFIG_UCC)	+= ucc.o
>   obj-$(CONFIG_UCC_SLOW)	+= ucc_slow.o
>   obj-$(CONFIG_UCC_FAST)	+= ucc_fast.o
> diff --git a/drivers/soc/fsl/qe/qmc.c b/drivers/soc/fsl/qe/qmc.c
> new file mode 100644
> index 000000000000..cfa7207353e0
> --- /dev/null
> +++ b/drivers/soc/fsl/qe/qmc.c
> @@ -0,0 +1,1533 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * QMC driver
> + *
> + * Copyright 2022 CS GROUP France
> + *
> + * Author: Herve Codina <herve.codina@bootlin.com>
> + */
> +
> +#include <soc/fsl/qe/qmc.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/hdlc.h>
> +#include <linux/interrupt.h>
> +#include <linux/io.h>
> +#include <linux/module.h>
> +#include <linux/of.h>
> +#include <linux/of_platform.h>
> +#include <linux/platform_device.h>
> +#include <linux/slab.h>
> +#include <soc/fsl/cpm.h>
> +#include <sysdev/fsl_soc.h>
> +#include "tsa.h"
> +
> +/* SCC general mode register high (32 bits) */
> +#define SCC_GSMRL	0x00
> +#define SCC_GSMRL_ENR		(1 << 5)
> +#define SCC_GSMRL_ENT		(1 << 4)
> +#define SCC_GSMRL_MODE_QMC	(0x0A << 0)
> +
> +/* SCC general mode register low (32 bits) */
> +#define SCC_GSMRH	0x04
> +#define   SCC_GSMRH_CTSS	(1 << 7)
> +#define   SCC_GSMRH_CDS		(1 << 8)
> +#define   SCC_GSMRH_CTSP	(1 << 9)
> +#define   SCC_GSMRH_CDP		(1 << 10)
> +
> +/* SCC event register (16 bits) */
> +#define SCC_SCCE	0x10
> +#define   SCC_SCCE_IQOV		(1 << 3)
> +#define   SCC_SCCE_GINT		(1 << 2)
> +#define   SCC_SCCE_GUN		(1 << 1)
> +#define   SCC_SCCE_GOV		(1 << 0)
> +
> +/* SCC mask register (16 bits) */
> +#define SCC_SCCM	0x14
> +/* Multichannel base pointer (32 bits) */
> +#define QMC_GBL_MCBASE		0x00
> +/* Multichannel controller state (16 bits) */
> +#define QMC_GBL_QMCSTATE	0x04
> +/* Maximum receive buffer length (16 bits) */
> +#define QMC_GBL_MRBLR		0x06
> +/* Tx time-slot assignment table pointer (16 bits) */
> +#define QMC_GBL_TX_S_PTR	0x08
> +/* Rx pointer (16 bits) */
> +#define QMC_GBL_RXPTR		0x0A
> +/* Global receive frame threshold (16 bits) */
> +#define QMC_GBL_GRFTHR		0x0C
> +/* Global receive frame count (16 bits) */
> +#define QMC_GBL_GRFCNT		0x0E
> +/* Multichannel interrupt base address (32 bits) */
> +#define QMC_GBL_INTBASE		0x10
> +/* Multichannel interrupt pointer (32 bits) */
> +#define QMC_GBL_INTPTR		0x14
> +/* Rx time-slot assignment table pointer (16 bits) */
> +#define QMC_GBL_RX_S_PTR	0x18
> +/* Tx pointer (16 bits) */
> +#define QMC_GBL_TXPTR		0x1A
> +/* CRC constant (32 bits) */
> +#define QMC_GBL_C_MASK32	0x1C
> +/* Time slot assignment table Rx (32 x 16 bits) */
> +#define QMC_GBL_TSATRX		0x20
> +/* Time slot assignment table Tx (32 x 16 bits) */
> +#define QMC_GBL_TSATTX		0x60
> +/* CRC constant (16 bits) */
> +#define QMC_GBL_C_MASK16	0xA0
> +
> +/* TSA entry (16bit entry in TSATRX and TSATTX) */
> +#define QMC_TSA_VALID		(1 << 15)
> +#define QMC_TSA_WRAP		(1 << 14)
> +#define QMC_TSA_MASK		(0x303F)
> +#define QMC_TSA_CHANNEL(x)	((x) << 6)
> +
> +/* Tx buffer descriptor base address (16 bits, offset from MCBASE) */
> +#define QMC_SPE_TBASE	0x00
> +
> +/* Channel mode register (16 bits) */
> +#define QMC_SPE_CHAMR	0x02
> +#define   QMC_SPE_CHAMR_MODE_HDLC	(1 << 15)
> +#define   QMC_SPE_CHAMR_MODE_TRANSP	((0 << 15) | (1 << 13))
> +#define   QMC_SPE_CHAMR_ENT		(1 << 12)
> +#define   QMC_SPE_CHAMR_POL		(1 << 8)
> +#define   QMC_SPE_CHAMR_HDLC_IDLM	(1 << 13)
> +#define   QMC_SPE_CHAMR_HDLC_CRC	(1 << 7)
> +#define   QMC_SPE_CHAMR_HDLC_NOF	(0x0f << 0)
> +#define   QMC_SPE_CHAMR_TRANSP_RD	(1 << 14)
> +#define   QMC_SPE_CHAMR_TRANSP_SYNC	(1 << 10)
> +
> +/* Tx internal state (32 bits) */
> +#define QMC_SPE_TSTATE	0x04
> +/* Tx buffer descriptor pointer (16 bits) */
> +#define QMC_SPE_TBPTR	0x0C
> +/* Zero-insertion state (32 bits) */
> +#define QMC_SPE_ZISTATE	0x14
> +/* Channel’s interrupt mask flags (16 bits) */
> +#define QMC_SPE_INTMSK	0x1C
> +/* Rx buffer descriptor base address (16 bits, offset from MCBASE) */
> +#define QMC_SPE_RBASE	0x20
> +/* HDLC: Maximum frame length register (16 bits) */
> +#define QMC_SPE_MFLR	0x22
> +/* TRANSPARENT: Transparent maximum receive length (16 bits) */
> +#define QMC_SPE_TMRBLR	0x22
> +/* Rx internal state (32 bits) */
> +#define QMC_SPE_RSTATE	0x24
> +/* Rx buffer descriptor pointer (16 bits) */
> +#define QMC_SPE_RBPTR	0x2C
> +/* Packs 4 bytes to 1 long word before writing to buffer (32 bits) */
> +#define QMC_SPE_RPACK	0x30
> +/* Zero deletion state (32 bits) */
> +#define QMC_SPE_ZDSTATE	0x34
> +
> +/* Transparent synchronization (16 bits) */
> +#define QMC_SPE_TRNSYNC 0x3C
> +#define   QMC_SPE_TRNSYNC_RX(x)	((x) << 8)
> +#define   QMC_SPE_TRNSYNC_TX(x)	((x) << 0)
> +
> +/* Interrupt related registers bits */
> +#define QMC_INT_V		(1 << 15)
> +#define QMC_INT_W		(1 << 14)
> +#define QMC_INT_NID		(1 << 13)
> +#define QMC_INT_IDL		(1 << 12)
> +#define QMC_INT_GET_CHANNEL(x)	(((x) & 0x0FC0) >> 6)
> +#define QMC_INT_MRF		(1 << 5)
> +#define QMC_INT_UN		(1 << 4)
> +#define QMC_INT_RXF		(1 << 3)
> +#define QMC_INT_BSY		(1 << 2)
> +#define QMC_INT_TXB		(1 << 1)
> +#define QMC_INT_RXB		(1 << 0)
> +
> +/* BD related registers bits */
> +#define QMC_BD_RX_E	(1 << 15)
> +#define QMC_BD_RX_W	(1 << 13)
> +#define QMC_BD_RX_I	(1 << 12)
> +#define QMC_BD_RX_L	(1 << 11)
> +#define QMC_BD_RX_F	(1 << 10)
> +#define QMC_BD_RX_CM	(1 << 9)
> +#define QMC_BD_RX_UB	(1 << 7)
> +#define QMC_BD_RX_LG	(1 << 5)
> +#define QMC_BD_RX_NO	(1 << 4)
> +#define QMC_BD_RX_AB	(1 << 3)
> +#define QMC_BD_RX_CR	(1 << 2)
> +
> +#define QMC_BD_TX_R	(1 << 15)
> +#define QMC_BD_TX_W	(1 << 13)
> +#define QMC_BD_TX_I	(1 << 12)
> +#define QMC_BD_TX_L	(1 << 11)
> +#define QMC_BD_TX_TC	(1 << 10)
> +#define QMC_BD_TX_CM	(1 << 9)
> +#define QMC_BD_TX_UB	(1 << 7)
> +#define QMC_BD_TX_PAD	(0x0f << 0)
> +
> +/* Numbers of BDs and interrupt items */
> +#define QMC_NB_TXBDS	8
> +#define QMC_NB_RXBDS	8
> +#define QMC_NB_INTS	128
> +
> +struct qmc_xfer_desc {
> +	union {
> +		void (*tx_complete)(void *context);
> +		void (*rx_complete)(void *context, size_t length);
> +	};
> +	void *context;
> +};
> +
> +struct qmc_chan {
> +	struct list_head list;
> +	unsigned int id;
> +	struct qmc *qmc;
> +	void *__iomem s_param;
> +	enum qmc_mode mode;
> +	u64	tx_ts_mask;
> +	u64	rx_ts_mask;
> +	bool is_reverse_data;
> +
> +	spinlock_t	tx_lock;
> +	cbd_t __iomem *txbds;
> +	cbd_t __iomem *txbd_free;
> +	cbd_t __iomem *txbd_done;
> +	struct qmc_xfer_desc tx_desc[QMC_NB_TXBDS];
> +	u64	nb_tx_underrun;
> +	bool	is_tx_stopped;
> +
> +	spinlock_t	rx_lock;
> +	cbd_t __iomem *rxbds;
> +	cbd_t __iomem *rxbd_free;
> +	cbd_t __iomem *rxbd_done;
> +	struct qmc_xfer_desc rx_desc[QMC_NB_RXBDS];
> +	u64	nb_rx_busy;
> +	int	rx_pending;
> +	bool	is_rx_halted;
> +	bool	is_rx_stopped;
> +};
> +
> +struct qmc {
> +	struct device *dev;
> +	struct tsa_serial *tsa_serial;
> +	void *__iomem scc_regs;
> +	void *__iomem scc_pram;
> +	void *__iomem dpram;
> +	u16 scc_pram_offset;
> +	cbd_t __iomem *bd_table;
> +	dma_addr_t bd_dma_addr;
> +	size_t bd_size;
> +	u16 __iomem *int_table;
> +	u16 __iomem *int_curr;
> +	dma_addr_t int_dma_addr;
> +	size_t int_size;
> +	struct list_head chan_head;
> +	struct qmc_chan *chans[64];
> +};
> +
> +static inline void qmc_write16(void *__iomem addr, u16 val)
> +{
> +	iowrite16be(val, addr);
> +}
> +
> +static inline u16 qmc_read16(void *__iomem addr)
> +{
> +	return ioread16be(addr);
> +}
> +
> +static inline void qmc_setbits16(void *__iomem addr, u16 set)
> +{
> +	qmc_write16(addr, qmc_read16(addr) | set);
> +}
> +
> +static inline void qmc_clrbits16(void *__iomem addr, u16 clr)
> +{
> +	qmc_write16(addr, qmc_read16(addr) & ~clr);
> +}
> +
> +static inline void qmc_write32(void *__iomem addr, u32 val)
> +{
> +	iowrite32be(val, addr);
> +}
> +
> +static inline u32 qmc_read32(void *__iomem addr)
> +{
> +	return ioread32be(addr);
> +}
> +
> +static inline void qmc_setbits32(void *__iomem addr, u32 set)
> +{
> +	qmc_write32(addr, qmc_read32(addr) | set);
> +}
> +
> +
> +int qmc_chan_get_info(struct qmc_chan *chan, struct qmc_chan_info *info)
> +{
> +	struct tsa_serial_info tsa_info;
> +	int ret;
> +
> +	/* Retrieve info from the TSA related serial */
> +	ret = tsa_serial_get_info(chan->qmc->tsa_serial, &tsa_info);
> +	if (ret)
> +		return ret;
> +
> +	info->mode = chan->mode;
> +	info->rx_fs_rate = tsa_info.rx_fs_rate;
> +	info->rx_bit_rate = tsa_info.rx_bit_rate;
> +	info->nb_tx_ts = hweight64(chan->tx_ts_mask);
> +	info->tx_fs_rate = tsa_info.tx_fs_rate;
> +	info->tx_bit_rate = tsa_info.tx_bit_rate;
> +	info->nb_rx_ts = hweight64(chan->rx_ts_mask);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(qmc_chan_get_info);
> +
> +int qmc_chan_set_param(struct qmc_chan *chan, const struct qmc_chan_param *param)
> +{
> +	if (param->mode != chan->mode)
> +		return -EINVAL;
> +
> +	switch (param->mode) {
> +	case QMC_HDLC:
> +		if ((param->hdlc.max_rx_buf_size % 4) ||
> +		    (param->hdlc.max_rx_buf_size < 8))
> +			return -EINVAL;
> +
> +		qmc_write16(chan->qmc->scc_pram + QMC_GBL_MRBLR,
> +			    param->hdlc.max_rx_buf_size - 8);
> +		qmc_write16(chan->s_param + QMC_SPE_MFLR,
> +			    param->hdlc.max_rx_frame_size);
> +		if (param->hdlc.is_crc32) {
> +			qmc_setbits16(chan->s_param + QMC_SPE_CHAMR,
> +				      QMC_SPE_CHAMR_HDLC_CRC);
> +		} else {
> +			qmc_clrbits16(chan->s_param + QMC_SPE_CHAMR,
> +				      QMC_SPE_CHAMR_HDLC_CRC);
> +		}
> +		break;
> +
> +	case QMC_TRANSPARENT:
> +		qmc_write16(chan->s_param + QMC_SPE_TMRBLR,
> +			    param->transp.max_rx_buf_size);
> +		break;
> +
> +	default:
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(qmc_chan_set_param);
> +
> +int qmc_chan_write_submit(struct qmc_chan *chan, dma_addr_t addr, size_t length,
> +			  void (*complete)(void *context), void *context)
> +{
> +	struct qmc_xfer_desc *xfer_desc;
> +	unsigned long flags;
> +	cbd_t *__iomem bd;
> +	u16 ctrl;
> +	int ret;
> +
> +	/*
> +	 * R bit  UB bit
> +	 *   0       0  : The BD is free
> +	 *   1       1  : The BD is in used, waiting for transfer
> +	 *   0       1  : The BD is in used, waiting for completion
> +	 *   1       0  : Should not append
> +	 */
> +
> +	spin_lock_irqsave(&chan->tx_lock, flags);
> +	bd = chan->txbd_free;
> +
> +	ctrl = qmc_read16(&bd->cbd_sc);
> +	if (ctrl & (QMC_BD_TX_R | QMC_BD_TX_UB)) {
> +		/* We are full ... */
> +		ret = -EBUSY;
> +		goto end;
> +	}
> +
> +	qmc_write16(&bd->cbd_datlen, length);
> +	qmc_write32(&bd->cbd_bufaddr, addr);
> +
> +	xfer_desc = &chan->tx_desc[bd - chan->txbds];
> +	xfer_desc->tx_complete = complete;
> +	xfer_desc->context = context;
> +
> +	/* Activate the descriptor */
> +	ctrl |= (QMC_BD_TX_R | QMC_BD_TX_UB);
> +	wmb(); /* Be sure to flush the descriptor before control update */
> +	qmc_write16(&bd->cbd_sc, ctrl);
> +
> +	if (!chan->is_tx_stopped)
> +		qmc_setbits16(chan->s_param + QMC_SPE_CHAMR, QMC_SPE_CHAMR_POL);
> +
> +	if (ctrl & QMC_BD_TX_W)
> +		chan->txbd_free = chan->txbds;
> +	else
> +		chan->txbd_free++;
> +
> +	ret = 0;
> +
> +end:
> +	spin_unlock_irqrestore(&chan->tx_lock, flags);
> +	return ret;
> +}
> +EXPORT_SYMBOL(qmc_chan_write_submit);
> +
> +static void qmc_chan_write_done(struct qmc_chan *chan)
> +{
> +	struct qmc_xfer_desc *xfer_desc;
> +	void (*complete)(void *context);
> +	unsigned long flags;
> +	void *context;
> +	cbd_t *__iomem bd;
> +	u16 ctrl;
> +
> +	/*
> +	 * R bit  UB bit
> +	 *   0       0  : The BD is free
> +	 *   1       1  : The BD is in used, waiting for transfer
> +	 *   0       1  : The BD is in used, waiting for completion
> +	 *   1       0  : Should not append
> +	 */
> +
> +	spin_lock_irqsave(&chan->tx_lock, flags);
> +	bd = chan->txbd_done;
> +
> +	ctrl = qmc_read16(&bd->cbd_sc);
> +	while (!(ctrl & QMC_BD_TX_R)) {
> +		if (!(ctrl & QMC_BD_TX_UB))
> +			goto end;
> +
> +		xfer_desc = &chan->tx_desc[bd - chan->txbds];
> +		complete = xfer_desc->tx_complete;
> +		context = xfer_desc->context;
> +		xfer_desc->tx_complete = NULL;
> +		xfer_desc->context = NULL;
> +
> +		qmc_write16(&bd->cbd_sc, ctrl & ~QMC_BD_TX_UB);
> +
> +		if (ctrl & QMC_BD_TX_W)
> +			chan->txbd_done = chan->txbds;
> +		else
> +			chan->txbd_done++;
> +
> +		if (complete) {
> +			spin_unlock_irqrestore(&chan->tx_lock, flags);
> +			complete(context);
> +			spin_lock_irqsave(&chan->tx_lock, flags);
> +		}
> +
> +		bd = chan->txbd_done;
> +		ctrl = qmc_read16(&bd->cbd_sc);
> +	}
> +
> +end:
> +	spin_unlock_irqrestore(&chan->tx_lock, flags);
> +}
> +
> +int qmc_chan_read_submit(struct qmc_chan *chan, dma_addr_t addr, size_t length,
> +			 void (*complete)(void *context, size_t length), void *context)
> +{
> +	struct qmc_xfer_desc *xfer_desc;
> +	unsigned long flags;
> +	cbd_t *__iomem bd;
> +	u16 ctrl;
> +	int ret;
> +
> +	/*
> +	 * E bit  UB bit
> +	 *   0       0  : The BD is free
> +	 *   1       1  : The BD is in used, waiting for transfer
> +	 *   0       1  : The BD is in used, waiting for completion
> +	 *   1       0  : Should not append
> +	 */
> +
> +	spin_lock_irqsave(&chan->rx_lock, flags);
> +	bd = chan->rxbd_free;
> +
> +	ctrl = qmc_read16(&bd->cbd_sc);
> +	if (ctrl & (QMC_BD_RX_E | QMC_BD_RX_UB)) {
> +		/* We are full ... */
> +		ret = -EBUSY;
> +		goto end;
> +	}
> +
> +	qmc_write16(&bd->cbd_datlen, 0); /* data length is updated by the QMC */
> +	qmc_write32(&bd->cbd_bufaddr, addr);
> +
> +	xfer_desc = &chan->rx_desc[bd - chan->rxbds];
> +	xfer_desc->rx_complete = complete;
> +	xfer_desc->context = context;
> +
> +	/* Activate the descriptor */
> +	ctrl |= (QMC_BD_RX_E | QMC_BD_RX_UB);
> +	wmb(); /* Be sure to flush data before descriptor activation */
> +	qmc_write16(&bd->cbd_sc, ctrl);
> +
> +	/* Restart receiver if needed */
> +	if (chan->is_rx_halted && !chan->is_rx_stopped) {
> +		/* Restart receiver */
> +		if (chan->mode == QMC_TRANSPARENT)
> +			qmc_write32(chan->s_param + QMC_SPE_ZDSTATE, 0x18000080);
> +		else
> +			qmc_write32(chan->s_param + QMC_SPE_ZDSTATE, 0x00000080);
> +		qmc_write32(chan->s_param + QMC_SPE_RSTATE, 0x31000000);
> +		chan->is_rx_halted = false;
> +	}
> +	chan->rx_pending++;
> +
> +	if (ctrl & QMC_BD_RX_W)
> +		chan->rxbd_free = chan->rxbds;
> +	else
> +		chan->rxbd_free++;
> +
> +	ret = 0;
> +end:
> +	spin_unlock_irqrestore(&chan->rx_lock, flags);
> +	return ret;
> +}
> +EXPORT_SYMBOL(qmc_chan_read_submit);
> +
> +static void qmc_chan_read_done(struct qmc_chan *chan)
> +{
> +	void (*complete)(void *context, size_t size);
> +	struct qmc_xfer_desc *xfer_desc;
> +	unsigned long flags;
> +	cbd_t *__iomem bd;
> +	void *context;
> +	u16 datalen;
> +	u16 ctrl;
> +
> +	/*
> +	 * E bit  UB bit
> +	 *   0       0  : The BD is free
> +	 *   1       1  : The BD is in used, waiting for transfer
> +	 *   0       1  : The BD is in used, waiting for completion
> +	 *   1       0  : Should not append
> +	 */
> +
> +	spin_lock_irqsave(&chan->rx_lock, flags);
> +	bd = chan->rxbd_done;
> +
> +	ctrl = qmc_read16(&bd->cbd_sc);
> +	while (!(ctrl & QMC_BD_RX_E)) {
> +		if (!(ctrl & QMC_BD_RX_UB))
> +			goto end;
> +
> +		xfer_desc = &chan->rx_desc[bd - chan->rxbds];
> +		complete = xfer_desc->rx_complete;
> +		context = xfer_desc->context;
> +		xfer_desc->rx_complete = NULL;
> +		xfer_desc->context = NULL;
> +
> +		datalen = qmc_read16(&bd->cbd_datlen);
> +		qmc_write16(&bd->cbd_sc, ctrl & ~QMC_BD_RX_UB);
> +
> +		if (ctrl & QMC_BD_RX_W)
> +			chan->rxbd_done = chan->rxbds;
> +		else
> +			chan->rxbd_done++;
> +
> +		chan->rx_pending--;
> +
> +		if (complete) {
> +			spin_unlock_irqrestore(&chan->rx_lock, flags);
> +			complete(context, datalen);
> +			spin_lock_irqsave(&chan->rx_lock, flags);
> +		}
> +
> +		bd = chan->rxbd_done;
> +		ctrl = qmc_read16(&bd->cbd_sc);
> +	}
> +
> +end:
> +	spin_unlock_irqrestore(&chan->rx_lock, flags);
> +}
> +
> +static int qmc_chan_command(struct qmc_chan *chan, u8 qmc_opcode)
> +{
> +	return cpm_command(chan->id << 2, (qmc_opcode << 4) | 0x0E);
> +}
> +
> +static int qmc_chan_stop_rx(struct qmc_chan *chan)
> +{
> +	unsigned long flags;
> +	int ret;
> +
> +	spin_lock_irqsave(&chan->rx_lock, flags);
> +
> +	/* Send STOP RECEIVE command */
> +	ret = qmc_chan_command(chan, 0x0);
> +	if (ret) {
> +		dev_err(chan->qmc->dev, "chan %u: Send STOP RECEIVE failed (%d)\n",
> +			chan->id, ret);
> +		goto end;
> +	}
> +
> +	chan->is_rx_stopped = true;
> +
> +end:
> +	spin_unlock_irqrestore(&chan->rx_lock, flags);
> +	return ret;
> +}
> +
> +static int qmc_chan_stop_tx(struct qmc_chan *chan)
> +{
> +	unsigned long flags;
> +	int ret;
> +
> +	spin_lock_irqsave(&chan->tx_lock, flags);
> +
> +	/* Send STOP TRANSMIT command */
> +	ret = qmc_chan_command(chan, 0x1);
> +	if (ret) {
> +		dev_err(chan->qmc->dev, "chan %u: Send STOP TRANSMIT failed (%d)\n",
> +			chan->id, ret);
> +		goto end;
> +	}
> +
> +	chan->is_tx_stopped = true;
> +
> +end:
> +	spin_unlock_irqrestore(&chan->tx_lock, flags);
> +	return ret;
> +}
> +
> +int qmc_chan_stop(struct qmc_chan *chan, int direction)
> +{
> +	int ret;
> +
> +	if (direction & QMC_CHAN_READ) {
> +		ret = qmc_chan_stop_rx(chan);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	if (direction & QMC_CHAN_WRITE) {
> +		ret = qmc_chan_stop_tx(chan);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(qmc_chan_stop);
> +
> +static void qmc_chan_start_rx(struct qmc_chan *chan)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&chan->rx_lock, flags);
> +
> +	/* Restart the receiver */
> +	if (chan->mode == QMC_TRANSPARENT)
> +		qmc_write32(chan->s_param + QMC_SPE_ZDSTATE, 0x18000080);
> +	else
> +		qmc_write32(chan->s_param + QMC_SPE_ZDSTATE, 0x00000080);
> +	qmc_write32(chan->s_param + QMC_SPE_RSTATE, 0x31000000);
> +	chan->is_rx_halted = false;
> +
> +	chan->is_rx_stopped = false;
> +
> +	spin_unlock_irqrestore(&chan->rx_lock, flags);
> +}
> +
> +static void qmc_chan_start_tx(struct qmc_chan *chan)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&chan->tx_lock, flags);
> +
> +	/*
> +	 * Enable channel transmitter as it could be disabled if
> +	 * qmc_chan_reset() was called.
> +	 */
> +	qmc_setbits16(chan->s_param + QMC_SPE_CHAMR, QMC_SPE_CHAMR_ENT);
> +
> +	/* Set the POL bit in the channel mode register */
> +	qmc_setbits16(chan->s_param + QMC_SPE_CHAMR, QMC_SPE_CHAMR_POL);
> +
> +	chan->is_tx_stopped = false;
> +
> +	spin_unlock_irqrestore(&chan->tx_lock, flags);
> +}
> +
> +int qmc_chan_start(struct qmc_chan *chan, int direction)
> +{
> +	if (direction & QMC_CHAN_READ)
> +		qmc_chan_start_rx(chan);
> +
> +	if (direction & QMC_CHAN_WRITE)
> +		qmc_chan_start_tx(chan);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(qmc_chan_start);
> +
> +static void qmc_chan_reset_rx(struct qmc_chan *chan)
> +{
> +	struct qmc_xfer_desc *xfer_desc;
> +	unsigned long flags;
> +	cbd_t *__iomem bd;
> +	u16 ctrl;
> +
> +	spin_lock_irqsave(&chan->rx_lock, flags);
> +	bd = chan->rxbds;
> +	do {
> +		ctrl = qmc_read16(&bd->cbd_sc);
> +		qmc_write16(&bd->cbd_sc, ctrl & ~(QMC_BD_RX_UB | QMC_BD_RX_E));
> +
> +		xfer_desc = &chan->rx_desc[bd - chan->rxbds];
> +		xfer_desc->rx_complete = NULL;
> +		xfer_desc->context = NULL;
> +
> +		bd++;
> +	} while (!(ctrl & QMC_BD_RX_W));
> +
> +	chan->rxbd_free = chan->rxbds;
> +	chan->rxbd_done = chan->rxbds;
> +	qmc_write16(chan->s_param + QMC_SPE_RBPTR,
> +		    qmc_read16(chan->s_param + QMC_SPE_RBASE));
> +
> +	chan->rx_pending = 0;
> +	chan->is_rx_stopped = false;
> +
> +	spin_unlock_irqrestore(&chan->rx_lock, flags);
> +}
> +
> +static void qmc_chan_reset_tx(struct qmc_chan *chan)
> +{
> +	struct qmc_xfer_desc *xfer_desc;
> +	unsigned long flags;
> +	cbd_t *__iomem bd;
> +	u16 ctrl;
> +
> +	spin_lock_irqsave(&chan->tx_lock, flags);
> +
> +	/* Disable transmitter. It will be re-enable on qmc_chan_start() */
> +	qmc_clrbits16(chan->s_param + QMC_SPE_CHAMR, QMC_SPE_CHAMR_ENT);
> +
> +	bd = chan->txbds;
> +	do {
> +		ctrl = qmc_read16(&bd->cbd_sc);
> +		qmc_write16(&bd->cbd_sc, ctrl & ~(QMC_BD_TX_UB | QMC_BD_TX_R));
> +
> +		xfer_desc = &chan->tx_desc[bd - chan->txbds];
> +		xfer_desc->tx_complete = NULL;
> +		xfer_desc->context = NULL;
> +
> +		bd++;
> +	} while (!(ctrl & QMC_BD_TX_W));
> +
> +	chan->txbd_free = chan->txbds;
> +	chan->txbd_done = chan->txbds;
> +	qmc_write16(chan->s_param + QMC_SPE_TBPTR,
> +		    qmc_read16(chan->s_param + QMC_SPE_TBASE));
> +
> +	/* Reset TSTATE and ZISTATE to their initial value */
> +	qmc_write32(chan->s_param + QMC_SPE_TSTATE, 0x30000000);
> +	qmc_write32(chan->s_param + QMC_SPE_ZISTATE, 0x00000100);
> +
> +	spin_unlock_irqrestore(&chan->tx_lock, flags);
> +}
> +
> +int qmc_chan_reset(struct qmc_chan *chan, int direction)
> +{
> +	if (direction & QMC_CHAN_READ)
> +		qmc_chan_reset_rx(chan);
> +
> +	if (direction & QMC_CHAN_WRITE)
> +		qmc_chan_reset_tx(chan);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(qmc_chan_reset);
> +
> +static int qmc_check_chans(struct qmc *qmc)
> +{
> +	struct tsa_serial_info info;
> +	bool is_one_table = false;
> +	struct qmc_chan *chan;
> +	u64 tx_ts_mask = 0;
> +	u64 rx_ts_mask = 0;
> +	u64 tx_ts_assigned_mask;
> +	u64 rx_ts_assigned_mask;
> +	int ret;
> +
> +	/* Retrieve info from the TSA related serial */
> +	ret = tsa_serial_get_info(qmc->tsa_serial, &info);
> +	if (ret)
> +		return ret;
> +
> +	/*
> +	 * If more than 32 TS are assigned to this serial, one common table is
> +	 * used for Tx and Rx and so masks must be equal for all channels.
> +	 */
> +	if ((info.nb_tx_ts > 32) || (info.nb_rx_ts > 32)) {
> +		if (info.nb_tx_ts != info.nb_rx_ts) {
> +			dev_err(qmc->dev, "Number of TSA Tx/Rx TS assigned are not equal\n");
> +			return -EINVAL;
> +		}
> +		is_one_table = true;
> +	}
> +
> +
> +	tx_ts_assigned_mask = (((u64)1) << info.nb_tx_ts) - 1;
> +	rx_ts_assigned_mask = (((u64)1) << info.nb_rx_ts) - 1;
> +
> +	list_for_each_entry(chan, &qmc->chan_head, list) {
> +		if (chan->tx_ts_mask > tx_ts_assigned_mask) {
> +			dev_err(qmc->dev, "chan %u uses TSA unassigned Tx TS\n", chan->id);
> +			return -EINVAL;
> +		}
> +		if (tx_ts_mask & chan->tx_ts_mask) {
> +			dev_err(qmc->dev, "chan %u uses an already used Tx TS\n", chan->id);
> +			return -EINVAL;
> +		}
> +
> +		if (chan->rx_ts_mask > rx_ts_assigned_mask) {
> +			dev_err(qmc->dev, "chan %u uses TSA unassigned Rx TS\n", chan->id);
> +			return -EINVAL;
> +		}
> +		if (rx_ts_mask & chan->rx_ts_mask) {
> +			dev_err(qmc->dev, "chan %u uses an already used Rx TS\n", chan->id);
> +			return -EINVAL;
> +		}
> +
> +		if (is_one_table && (chan->tx_ts_mask != chan->rx_ts_mask)) {
> +			dev_err(qmc->dev, "chan %u uses different Rx and Tx TS\n", chan->id);
> +			return -EINVAL;
> +		}
> +
> +		tx_ts_mask |= chan->tx_ts_mask;
> +		rx_ts_mask |= chan->rx_ts_mask;
> +	}
> +
> +	return 0;
> +}
> +
> +static unsigned int qmc_nb_chans(struct qmc *qmc)
> +{
> +	unsigned int count = 0;
> +	struct qmc_chan *chan;
> +
> +	list_for_each_entry(chan, &qmc->chan_head, list)
> +		count++;
> +
> +	return count;
> +}
> +
> +static int qmc_of_parse_chans(struct qmc *qmc, struct device_node *np)
> +{
> +	struct device_node *chan_np;
> +	struct qmc_chan *chan;
> +	const char *mode;
> +	u32 chan_id;
> +	u64 ts_mask;
> +	int ret;
> +
> +	for_each_available_child_of_node(np, chan_np) {
> +		ret = of_property_read_u32(chan_np, "reg", &chan_id);
> +		if (ret) {
> +			dev_err(qmc->dev, "%pOF: failed to read reg\n", chan_np);
> +			of_node_put(chan_np);
> +			return ret;
> +		}
> +		if (chan_id > 63) {
> +			dev_err(qmc->dev, "%pOF: Invalid chan_id\n", chan_np);
> +			of_node_put(chan_np);
> +			return -EINVAL;
> +		}
> +
> +		chan = devm_kzalloc(qmc->dev, sizeof(*chan), GFP_KERNEL);
> +		if (!chan) {
> +			of_node_put(chan_np);
> +			return -ENOMEM;
> +		}
> +
> +		chan->id = chan_id;
> +		spin_lock_init(&chan->rx_lock);
> +		spin_lock_init(&chan->tx_lock);
> +
> +		ret = of_property_read_u64(chan_np, "fsl,tx-ts-mask", &ts_mask);
> +		if (ret) {
> +			dev_err(qmc->dev, "%pOF: failed to read fsl,tx-ts-mask\n",
> +				chan_np);
> +			of_node_put(chan_np);
> +			return ret;
> +		}
> +		chan->tx_ts_mask = ts_mask;
> +
> +		ret = of_property_read_u64(chan_np, "fsl,rx-ts-mask", &ts_mask);
> +		if (ret) {
> +			dev_err(qmc->dev, "%pOF: failed to read fsl,rx-ts-mask\n",
> +				chan_np);
> +			of_node_put(chan_np);
> +			return ret;
> +		}
> +		chan->rx_ts_mask = ts_mask;
> +
> +		mode = "transparent";
> +		ret = of_property_read_string(chan_np, "fsl,operational-mode", &mode);
> +		if (ret && ret != -EINVAL) {
> +			dev_err(qmc->dev, "%pOF: failed to read fsl,operational-mode\n",
> +				chan_np);
> +			of_node_put(chan_np);
> +			return ret;
> +		}
> +		if (!strcmp(mode, "transparent")) {
> +			chan->mode = QMC_TRANSPARENT;
> +		} else if (!strcmp(mode, "hdlc")) {
> +			chan->mode = QMC_HDLC;
> +		} else {
> +			dev_err(qmc->dev, "%pOF: Invalid fsl,operational-mode (%s)\n",
> +				chan_np, mode);
> +			of_node_put(chan_np);
> +			return -EINVAL;
> +		}
> +
> +		chan->is_reverse_data = of_property_read_bool(chan_np,
> +							      "fsl,reverse-data");
> +
> +		list_add_tail(&chan->list, &qmc->chan_head);
> +		qmc->chans[chan->id] = chan;
> +	}
> +
> +	return qmc_check_chans(qmc);
> +}
> +
> +static int qmc_setup_tsa_64rxtx(struct qmc *qmc, const struct tsa_serial_info *info)
> +{
> +	struct qmc_chan *chan;
> +	unsigned int i;
> +	u16 val;
> +
> +	/*
> +	 * Use a common Tx/Rx 64 entries table.
> +	 * Everything was previously checked, Tx and Rx related stuffs are
> +	 * identical -> Used Rx related stuff to build the table
> +	 */
> +
> +	/* Invalidate all entries */
> +	for (i = 0; i < 64; i++)
> +		qmc_write16(qmc->scc_pram + QMC_GBL_TSATRX + (i * 2), 0x0000);
> +
> +	/* Set entries based on Rx stuff*/
> +	list_for_each_entry(chan, &qmc->chan_head, list) {
> +		for (i = 0; i < info->nb_rx_ts; i++) {
> +			if (!(chan->rx_ts_mask & (((u64)1) << i)))
> +				continue;
> +
> +			val = QMC_TSA_VALID | QMC_TSA_MASK |
> +			      QMC_TSA_CHANNEL(chan->id);
> +			qmc_write16(qmc->scc_pram + QMC_GBL_TSATRX + (i * 2), val);
> +		}
> +	}
> +
> +	/* Set Wrap bit on last entry */
> +	qmc_setbits16(qmc->scc_pram + QMC_GBL_TSATRX + ((info->nb_rx_ts - 1) * 2),
> +		      QMC_TSA_WRAP);
> +
> +	/* Init pointers to the table */
> +	val = qmc->scc_pram_offset + QMC_GBL_TSATRX;
> +	qmc_write16(qmc->scc_pram + QMC_GBL_RX_S_PTR, val);
> +	qmc_write16(qmc->scc_pram + QMC_GBL_RXPTR, val);
> +	qmc_write16(qmc->scc_pram + QMC_GBL_TX_S_PTR, val);
> +	qmc_write16(qmc->scc_pram + QMC_GBL_TXPTR, val);
> +
> +	return 0;
> +}
> +
> +static int qmc_setup_tsa_32rx_32tx(struct qmc *qmc, const struct tsa_serial_info *info)
> +{
> +	struct qmc_chan *chan;
> +	unsigned int i;
> +	u16 val;
> +
> +	/*
> +	 * Use a Tx 32 entries table and a Rx 32 entries table.
> +	 * Everything was previously checked.
> +	 */
> +
> +	/* Invalidate all entries */
> +	for (i = 0; i < 32; i++) {
> +		qmc_write16(qmc->scc_pram + QMC_GBL_TSATRX + (i * 2), 0x0000);
> +		qmc_write16(qmc->scc_pram + QMC_GBL_TSATTX + (i * 2), 0x0000);
> +	}
> +
> +	/* Set entries based on Rx and Tx stuff*/
> +	list_for_each_entry(chan, &qmc->chan_head, list) {
> +		/* Rx part */
> +		for (i = 0; i < info->nb_rx_ts; i++) {
> +			if (!(chan->rx_ts_mask & (((u64)1) << i)))
> +				continue;
> +
> +			val = QMC_TSA_VALID | QMC_TSA_MASK |
> +			      QMC_TSA_CHANNEL(chan->id);
> +			qmc_write16(qmc->scc_pram + QMC_GBL_TSATRX + (i * 2), val);
> +		}
> +		/* Tx part */
> +		for (i = 0; i < info->nb_tx_ts; i++) {
> +			if (!(chan->tx_ts_mask & (((u64)1) << i)))
> +				continue;
> +
> +			val = QMC_TSA_VALID | QMC_TSA_MASK |
> +			      QMC_TSA_CHANNEL(chan->id);
> +			qmc_write16(qmc->scc_pram + QMC_GBL_TSATTX + (i * 2), val);
> +		}
> +	}
> +
> +	/* Set Wrap bit on last entries */
> +	qmc_setbits16(qmc->scc_pram + QMC_GBL_TSATRX + ((info->nb_rx_ts - 1) * 2),
> +		      QMC_TSA_WRAP);
> +	qmc_setbits16(qmc->scc_pram + QMC_GBL_TSATTX + ((info->nb_tx_ts - 1) * 2),
> +		      QMC_TSA_WRAP);
> +
> +	/* Init Rx pointers ...*/
> +	val = qmc->scc_pram_offset + QMC_GBL_TSATRX;
> +	qmc_write16(qmc->scc_pram + QMC_GBL_RX_S_PTR, val);
> +	qmc_write16(qmc->scc_pram + QMC_GBL_RXPTR, val);
> +
> +	/* ... and Tx pointers */
> +	val = qmc->scc_pram_offset + QMC_GBL_TSATTX;
> +	qmc_write16(qmc->scc_pram + QMC_GBL_TX_S_PTR, val);
> +	qmc_write16(qmc->scc_pram + QMC_GBL_TXPTR, val);
> +
> +	return 0;
> +}
> +
> +static int qmc_setup_tsa(struct qmc *qmc)
> +{
> +	struct tsa_serial_info info;
> +	int ret;
> +
> +	/* Retrieve info from the TSA related serial */
> +	ret = tsa_serial_get_info(qmc->tsa_serial, &info);
> +	if (ret)
> +		return ret;
> +
> +	/*
> +	 * Setup one common 64 entries table or two 32 entries (one for Tx and
> +	 * one for Tx) according to assigned TS numbers.
> +	 */
> +	return ((info.nb_tx_ts > 32) || (info.nb_rx_ts > 32)) ?
> +		qmc_setup_tsa_64rxtx(qmc, &info) :
> +		qmc_setup_tsa_32rx_32tx(qmc, &info);
> +}
> +
> +static int qmc_setup_chan_trnsync(struct qmc *qmc, struct qmc_chan *chan)
> +{
> +	struct tsa_serial_info info;
> +	u16 first_rx, last_tx;
> +	u16 trnsync;
> +	int ret;
> +
> +	/* Retrieve info from the TSA related serial */
> +	ret = tsa_serial_get_info(chan->qmc->tsa_serial, &info);
> +	if (ret)
> +		return ret;
> +
> +	/* Find the first Rx TS allocated to the channel */
> +	first_rx = chan->rx_ts_mask ? __ffs64(chan->rx_ts_mask) + 1 : 0;
> +
> +	/* Find the last Tx TS allocated to the channel */
> +	last_tx = fls64(chan->tx_ts_mask);
> +
> +	trnsync = 0;
> +	if (info.nb_rx_ts)
> +		trnsync |= QMC_SPE_TRNSYNC_RX((first_rx % info.nb_rx_ts) * 2);
> +	if (info.nb_tx_ts)
> +		trnsync |= QMC_SPE_TRNSYNC_TX((last_tx % info.nb_tx_ts) * 2);
> +
> +	qmc_write16(chan->s_param + QMC_SPE_TRNSYNC, trnsync);
> +
> +	dev_dbg(qmc->dev, "chan %u: trnsync=0x%04x, rx %u/%u 0x%llx, tx %u/%u 0x%llx\n",
> +		chan->id, trnsync,
> +		first_rx, info.nb_rx_ts, chan->rx_ts_mask,
> +		last_tx, info.nb_tx_ts, chan->tx_ts_mask);
> +
> +	return 0;
> +}
> +
> +static int qmc_setup_chan(struct qmc *qmc, struct qmc_chan *chan)
> +{
> +	unsigned int i;
> +	cbd_t __iomem *bd;
> +	int ret;
> +	u16 val;
> +
> +	chan->qmc = qmc;
> +
> +	/* Set channel specific parameter base address */
> +	chan->s_param = qmc->dpram + (chan->id * 64);
> +	/* 16 bd per channel (8 rx and 8 tx) */
> +	chan->txbds = qmc->bd_table + (chan->id * (QMC_NB_TXBDS + QMC_NB_RXBDS));
> +	chan->rxbds = qmc->bd_table + (chan->id * (QMC_NB_TXBDS + QMC_NB_RXBDS)) + QMC_NB_TXBDS;
> +
> +	chan->txbd_free = chan->txbds;
> +	chan->txbd_done = chan->txbds;
> +	chan->rxbd_free = chan->rxbds;
> +	chan->rxbd_done = chan->rxbds;
> +
> +	/* TBASE and TBPTR*/
> +	val = chan->id * (QMC_NB_TXBDS + QMC_NB_RXBDS) * sizeof(cbd_t);
> +	qmc_write16(chan->s_param + QMC_SPE_TBASE, val);
> +	qmc_write16(chan->s_param + QMC_SPE_TBPTR, val);
> +
> +	/* RBASE and RBPTR*/
> +	val = ((chan->id * (QMC_NB_TXBDS + QMC_NB_RXBDS)) + QMC_NB_TXBDS) * sizeof(cbd_t);
> +	qmc_write16(chan->s_param + QMC_SPE_RBASE, val);
> +	qmc_write16(chan->s_param + QMC_SPE_RBPTR, val);
> +	qmc_write32(chan->s_param + QMC_SPE_TSTATE, 0x30000000);
> +	qmc_write32(chan->s_param + QMC_SPE_RSTATE, 0x31000000);
> +	qmc_write32(chan->s_param + QMC_SPE_ZISTATE, 0x00000100);
> +	if (chan->mode == QMC_TRANSPARENT) {
> +		qmc_write32(chan->s_param + QMC_SPE_ZDSTATE, 0x18000080);
> +		qmc_write16(chan->s_param + QMC_SPE_TMRBLR, 60);
> +		val = QMC_SPE_CHAMR_MODE_TRANSP | QMC_SPE_CHAMR_TRANSP_SYNC;
> +		if (chan->is_reverse_data)
> +			val |= QMC_SPE_CHAMR_TRANSP_RD;
> +		qmc_write16(chan->s_param + QMC_SPE_CHAMR, val);
> +		ret = qmc_setup_chan_trnsync(qmc, chan);
> +		if (ret)
> +			return ret;
> +	} else {
> +		qmc_write32(chan->s_param + QMC_SPE_ZDSTATE, 0x00000080);
> +		qmc_write16(chan->s_param + QMC_SPE_MFLR, 60);
> +		qmc_write16(chan->s_param + QMC_SPE_CHAMR,
> +			QMC_SPE_CHAMR_MODE_HDLC | QMC_SPE_CHAMR_HDLC_IDLM);
> +	}
> +
> +	/* Do not enable interrupts now. They will be enabled later */
> +	qmc_write16(chan->s_param + QMC_SPE_INTMSK, 0x0000);
> +
> +	/* Init Rx BDs and set Wrap bit on last descriptor */
> +	BUILD_BUG_ON(QMC_NB_RXBDS == 0);
> +	val = QMC_BD_RX_I;
> +	for (i = 0; i < QMC_NB_RXBDS; i++) {
> +		bd = chan->rxbds + i;
> +		qmc_write16(&bd->cbd_sc, val);
> +	}
> +	bd = chan->rxbds + QMC_NB_RXBDS - 1;
> +	qmc_write16(&bd->cbd_sc, val | QMC_BD_RX_W);
> +
> +	/* Init Tx BDs and set Wrap bit on last descriptor */
> +	BUILD_BUG_ON(QMC_NB_TXBDS == 0);
> +	val = QMC_BD_TX_I;
> +	if (chan->mode == QMC_HDLC)
> +		val |= QMC_BD_TX_L | QMC_BD_TX_TC;
> +	for (i = 0; i < QMC_NB_TXBDS; i++) {
> +		bd = chan->txbds + i;
> +		qmc_write16(&bd->cbd_sc, val);
> +	}
> +	bd = chan->txbds + QMC_NB_TXBDS - 1;
> +	qmc_write16(&bd->cbd_sc, val | QMC_BD_TX_W);
> +
> +	return 0;
> +}
> +
> +static int qmc_setup_chans(struct qmc *qmc)
> +{
> +	struct qmc_chan *chan;
> +	int ret;
> +
> +	list_for_each_entry(chan, &qmc->chan_head, list) {
> +		ret = qmc_setup_chan(qmc, chan);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static int qmc_finalize_chans(struct qmc *qmc)
> +{
> +	struct qmc_chan *chan;
> +	int ret;
> +
> +	list_for_each_entry(chan, &qmc->chan_head, list) {
> +		/* Unmask channel interrupts */
> +		if (chan->mode == QMC_HDLC) {
> +			qmc_write16(chan->s_param + QMC_SPE_INTMSK,
> +				    QMC_INT_NID | QMC_INT_IDL | QMC_INT_MRF |
> +				    QMC_INT_UN | QMC_INT_RXF | QMC_INT_BSY |
> +				    QMC_INT_TXB | QMC_INT_RXB);
> +		} else {
> +			qmc_write16(chan->s_param + QMC_SPE_INTMSK,
> +				    QMC_INT_UN | QMC_INT_BSY |
> +				    QMC_INT_TXB | QMC_INT_RXB);
> +		}
> +
> +		/* Forced stop the channel */
> +		ret = qmc_chan_stop(chan, QMC_CHAN_ALL);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static int qmc_setup_ints(struct qmc *qmc)
> +{
> +	unsigned int i;
> +	u16 __iomem *last;
> +
> +	/* Raz all entries */
> +	for (i = 0; i < (qmc->int_size / sizeof(u16)); i++)
> +		qmc_write16(qmc->int_table + i, 0x0000);
> +
> +	/* Set Wrap bit on last entry */
> +	if (qmc->int_size >= sizeof(u16)) {
> +		last = qmc->int_table + (qmc->int_size / sizeof(u16)) - 1;
> +		qmc_write16(last, QMC_INT_W);
> +	}
> +
> +	return 0;
> +}
> +
> +static void qmc_irq_gint(struct qmc *qmc)
> +{
> +	struct qmc_chan *chan;
> +	unsigned int chan_id;
> +	unsigned long flags;
> +	u16 int_entry;
> +
> +	int_entry = qmc_read16(qmc->int_curr);
> +	while (int_entry & QMC_INT_V) {
> +		/* Clear all but the Wrap bit */
> +		qmc_write16(qmc->int_curr, int_entry & QMC_INT_W);
> +
> +		chan_id = QMC_INT_GET_CHANNEL(int_entry);
> +		chan = qmc->chans[chan_id];
> +		if (!chan) {
> +			dev_err(qmc->dev, "interrupt on invalid chan %u\n", chan_id);
> +			goto int_next;
> +		}
> +
> +		if (int_entry & QMC_INT_TXB)
> +			qmc_chan_write_done(chan);
> +
> +		if (int_entry & QMC_INT_UN) {
> +			dev_info(qmc->dev, "intr chan %u, 0x%04x (UN)\n", chan_id,
> +				 int_entry);
> +			chan->nb_tx_underrun++;
> +		}
> +
> +		if (int_entry & QMC_INT_BSY) {
> +			dev_info(qmc->dev, "intr chan %u, 0x%04x (BSY)\n", chan_id,
> +				 int_entry);
> +			chan->nb_rx_busy++;
> +			/* Restart the receiver if needed */
> +			spin_lock_irqsave(&chan->rx_lock, flags);
> +			if (chan->rx_pending && !chan->is_rx_stopped) {
> +				if (chan->mode == QMC_TRANSPARENT)
> +					qmc_write32(chan->s_param + QMC_SPE_ZDSTATE, 0x18000080);
> +				else
> +					qmc_write32(chan->s_param + QMC_SPE_ZDSTATE, 0x00000080);
> +				qmc_write32(chan->s_param + QMC_SPE_RSTATE, 0x31000000);
> +				chan->is_rx_halted = false;
> +			} else {
> +				chan->is_rx_halted = true;
> +			}
> +			spin_unlock_irqrestore(&chan->rx_lock, flags);
> +		}
> +
> +		if (int_entry & QMC_INT_RXB)
> +			qmc_chan_read_done(chan);
> +
> +int_next:
> +		if (int_entry & QMC_INT_W)
> +			qmc->int_curr = qmc->int_table;
> +		else
> +			qmc->int_curr++;
> +		int_entry = qmc_read16(qmc->int_curr);
> +	}
> +}
> +
> +static irqreturn_t qmc_irq_handler(int irq, void *priv)
> +{
> +	struct qmc *qmc = (struct qmc *)priv;
> +	u16 scce;
> +
> +	scce = qmc_read16(qmc->scc_regs + SCC_SCCE);
> +	qmc_write16(qmc->scc_regs + SCC_SCCE, scce);
> +
> +	if (unlikely(scce & SCC_SCCE_IQOV))
> +		dev_info(qmc->dev, "IRQ queue overflow\n");
> +
> +	if (unlikely(scce & SCC_SCCE_GUN))
> +		dev_err(qmc->dev, "Global transmitter underrun\n");
> +
> +	if (unlikely(scce & SCC_SCCE_GOV))
> +		dev_err(qmc->dev, "Global receiver overrun\n");
> +
> +	/* normal interrupt */
> +	if (likely(scce & SCC_SCCE_GINT))
> +		qmc_irq_gint(qmc);
> +
> +	return IRQ_HANDLED;
> +}
> +
> +static int qmc_probe(struct platform_device *pdev)
> +{
> +	struct device_node *np = pdev->dev.of_node;
> +	unsigned int nb_chans;
> +	struct resource *res;
> +	struct qmc *qmc;
> +	int irq;
> +	int ret;
> +
> +	qmc = devm_kzalloc(&pdev->dev, sizeof(*qmc), GFP_KERNEL);
> +	if (!qmc)
> +		return -ENOMEM;
> +
> +	qmc->dev = &pdev->dev;
> +	INIT_LIST_HEAD(&qmc->chan_head);
> +
> +	qmc->scc_regs = devm_platform_ioremap_resource_byname(pdev, "scc_regs");
> +	if (IS_ERR(qmc->scc_regs))
> +		return PTR_ERR(qmc->scc_regs);
> +
> +
> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "scc_pram");
> +	if (!res)
> +		return -EINVAL;
> +	qmc->scc_pram_offset = res->start - get_immrbase();
> +	qmc->scc_pram = devm_ioremap_resource(qmc->dev, res);
> +	if (IS_ERR(qmc->scc_pram))
> +		return PTR_ERR(qmc->scc_pram);
> +
> +	qmc->dpram  = devm_platform_ioremap_resource_byname(pdev, "dpram");
> +	if (IS_ERR(qmc->dpram))
> +		return PTR_ERR(qmc->dpram);
> +
> +	qmc->tsa_serial = devm_tsa_serial_get_byphandle(qmc->dev, np, "fsl,tsa-serial");
> +	if (IS_ERR(qmc->tsa_serial)) {
> +		return dev_err_probe(qmc->dev, PTR_ERR(qmc->tsa_serial),
> +				     "Failed to get TSA serial\n");
> +	}
> +
> +	/* Connect the serial (SCC) to TSA */
> +	ret = tsa_serial_connect(qmc->tsa_serial);
> +	if (ret) {
> +		dev_err(qmc->dev, "Failed to connect TSA serial\n");
> +		return ret;
> +	}
> +
> +	/* Parse channels informationss */
> +	ret = qmc_of_parse_chans(qmc, np);
> +	if (ret)
> +		goto err_tsa_serial_disconnect;
> +
> +	nb_chans = qmc_nb_chans(qmc);
> +
> +	/* Init GMSR_H and GMSR_L registers */
> +	qmc_write32(qmc->scc_regs + SCC_GSMRH,
> +		    SCC_GSMRH_CDS | SCC_GSMRH_CTSS | SCC_GSMRH_CDP | SCC_GSMRH_CTSP);
> +
> +	/* enable QMC mode */
> +	qmc_write32(qmc->scc_regs + SCC_GSMRL, SCC_GSMRL_MODE_QMC);
> +
> +	/*
> +	 * Allocate the buffer descriptor table
> +	 * 8 rx and 8 tx descriptors per channel
> +	 */
> +	qmc->bd_size = (nb_chans * (QMC_NB_TXBDS + QMC_NB_RXBDS)) * sizeof(cbd_t);
> +	qmc->bd_table = dmam_alloc_coherent(qmc->dev, qmc->bd_size,
> +		&qmc->bd_dma_addr, GFP_KERNEL);
> +	if (!qmc->bd_table) {
> +		dev_err(qmc->dev, "Failed to allocate bd table\n");
> +		ret = -ENOMEM;
> +		goto err_tsa_serial_disconnect;
> +	}
> +	memset(qmc->bd_table, 0, qmc->bd_size);
> +
> +	qmc_write32(qmc->scc_pram + QMC_GBL_MCBASE, qmc->bd_dma_addr);
> +
> +	/* Allocate the interrupt table */
> +	qmc->int_size = QMC_NB_INTS * sizeof(u16);
> +	qmc->int_table = dmam_alloc_coherent(qmc->dev, qmc->int_size,
> +		&qmc->int_dma_addr, GFP_KERNEL);
> +	if (!qmc->int_table) {
> +		dev_err(qmc->dev, "Failed to allocate interrupt table\n");
> +		ret = -ENOMEM;
> +		goto err_tsa_serial_disconnect;
> +	}
> +	memset(qmc->int_table, 0, qmc->int_size);
> +
> +	qmc->int_curr = qmc->int_table;
> +	qmc_write32(qmc->scc_pram + QMC_GBL_INTBASE, qmc->int_dma_addr);
> +	qmc_write32(qmc->scc_pram + QMC_GBL_INTPTR, qmc->int_dma_addr);
> +
> +	/* Set MRBLR (valid for HDLC only) max MRU + max CRC */
> +	qmc_write16(qmc->scc_pram + QMC_GBL_MRBLR, HDLC_MAX_MRU + 4);
> +
> +	qmc_write16(qmc->scc_pram + QMC_GBL_GRFTHR, 1);
> +	qmc_write16(qmc->scc_pram + QMC_GBL_GRFCNT, 1);
> +
> +	qmc_write32(qmc->scc_pram + QMC_GBL_C_MASK32, 0xDEBB20E3);
> +	qmc_write16(qmc->scc_pram + QMC_GBL_C_MASK16, 0xF0B8);
> +
> +	ret = qmc_setup_tsa(qmc);
> +	if (ret)
> +		goto err_tsa_serial_disconnect;
> +
> +	qmc_write16(qmc->scc_pram + QMC_GBL_QMCSTATE, 0x8000);
> +
> +	ret = qmc_setup_chans(qmc);
> +	if (ret)
> +		goto err_tsa_serial_disconnect;
> +
> +	/* Init interrupts table */
> +	ret = qmc_setup_ints(qmc);
> +	if (ret)
> +		goto err_tsa_serial_disconnect;
> +
> +	/* Disable and clear interrupts,  set the irq handler */
> +	qmc_write16(qmc->scc_regs + SCC_SCCM, 0x0000);
> +	qmc_write16(qmc->scc_regs + SCC_SCCE, 0x000F);
> +	irq = platform_get_irq(pdev, 0);
> +	if (irq < 0)
> +		goto err_tsa_serial_disconnect;
> +	ret = devm_request_irq(qmc->dev, irq, qmc_irq_handler, 0, "qmc", qmc);
> +	if (ret < 0)
> +		goto err_tsa_serial_disconnect;
> +
> +	/* Enable interrupts */
> +	qmc_write16(qmc->scc_regs + SCC_SCCM,
> +		SCC_SCCE_IQOV | SCC_SCCE_GINT | SCC_SCCE_GUN | SCC_SCCE_GOV);
> +
> +	ret = qmc_finalize_chans(qmc);
> +	if (ret < 0)
> +		goto err_disable_intr;
> +
> +	/* Enable transmiter and receiver */
> +	qmc_setbits32(qmc->scc_regs + SCC_GSMRL, SCC_GSMRL_ENR | SCC_GSMRL_ENT);
> +
> +	platform_set_drvdata(pdev, qmc);
> +
> +	return 0;
> +
> +err_disable_intr:
> +	qmc_write16(qmc->scc_regs + SCC_SCCM, 0);
> +
> +err_tsa_serial_disconnect:
> +	tsa_serial_disconnect(qmc->tsa_serial);
> +	return ret;
> +}
> +
> +static int qmc_remove(struct platform_device *pdev)
> +{
> +	struct qmc *qmc = platform_get_drvdata(pdev);
> +
> +	/* Disable transmiter and receiver */
> +	qmc_setbits32(qmc->scc_regs + SCC_GSMRL, 0);
> +
> +	/* Disable interrupts */
> +	qmc_write16(qmc->scc_regs + SCC_SCCM, 0);
> +
> +	/* Disconnect the serial from TSA */
> +	tsa_serial_disconnect(qmc->tsa_serial);
> +
> +	return 0;
> +}
> +
> +static const struct of_device_id qmc_id_table[] = {
> +	{ .compatible = "fsl,cpm1-scc-qmc" },
> +	{} /* sentinel */
> +};
> +MODULE_DEVICE_TABLE(of, qmc_id_table);
> +
> +static struct platform_driver qmc_driver = {
> +	.driver = {
> +		.name = "fsl-qmc",
> +		.of_match_table = of_match_ptr(qmc_id_table),
> +	},
> +	.probe = qmc_probe,
> +	.remove = qmc_remove,
> +};
> +module_platform_driver(qmc_driver);
> +
> +struct qmc_chan *qmc_chan_get_byphandle(struct device_node *np, const char *phandle_name)
> +{
> +	struct of_phandle_args out_args;
> +	struct platform_device *pdev;
> +	struct qmc_chan *qmc_chan;
> +	struct qmc *qmc;
> +	int ret;
> +
> +	ret = of_parse_phandle_with_fixed_args(np, phandle_name, 1, 0,
> +					       &out_args);
> +	if (ret < 0)
> +		return ERR_PTR(ret);
> +
> +	if (!of_match_node(qmc_driver.driver.of_match_table, out_args.np)) {
> +		of_node_put(out_args.np);
> +		return ERR_PTR(-EINVAL);
> +	}
> +
> +	pdev = of_find_device_by_node(out_args.np);
> +	of_node_put(out_args.np);
> +	if (!pdev)
> +		return ERR_PTR(-ENODEV);
> +
> +	qmc = platform_get_drvdata(pdev);
> +	if (!qmc) {
> +		platform_device_put(pdev);
> +		return ERR_PTR(-EPROBE_DEFER);
> +	}
> +
> +	if (out_args.args_count != 1) {
> +		platform_device_put(pdev);
> +		return ERR_PTR(-EINVAL);
> +	}
> +
> +	if (out_args.args[0] >= ARRAY_SIZE(qmc->chans)) {
> +		platform_device_put(pdev);
> +		return ERR_PTR(-EINVAL);
> +	}
> +
> +	qmc_chan = qmc->chans[out_args.args[0]];
> +	if (!qmc_chan) {
> +		platform_device_put(pdev);
> +		return ERR_PTR(-ENOENT);
> +	}
> +
> +	return qmc_chan;
> +}
> +EXPORT_SYMBOL(qmc_chan_get_byphandle);
> +
> +void qmc_chan_put(struct qmc_chan *chan)
> +{
> +	put_device(chan->qmc->dev);
> +}
> +EXPORT_SYMBOL(qmc_chan_put);
> +
> +static void devm_qmc_chan_release(struct device *dev, void *res)
> +{
> +	struct qmc_chan **qmc_chan = res;
> +
> +	qmc_chan_put(*qmc_chan);
> +}
> +
> +struct qmc_chan *devm_qmc_chan_get_byphandle(struct device *dev,
> +					     struct device_node *np,
> +					     const char *phandle_name)
> +{
> +	struct qmc_chan *qmc_chan;
> +	struct qmc_chan **dr;
> +
> +	dr = devres_alloc(devm_qmc_chan_release, sizeof(*dr), GFP_KERNEL);
> +	if (!dr)
> +		return ERR_PTR(-ENOMEM);
> +
> +	qmc_chan = qmc_chan_get_byphandle(np, phandle_name);
> +	if (!IS_ERR(qmc_chan)) {
> +		*dr = qmc_chan;
> +		devres_add(dev, dr);
> +	} else {
> +		devres_free(dr);
> +	}
> +
> +	return qmc_chan;
> +}
> +EXPORT_SYMBOL(devm_qmc_chan_get_byphandle);
> +
> +MODULE_AUTHOR("Herve Codina <herve.codina@bootlin.com>");
> +MODULE_DESCRIPTION("CPM QMC driver");
> +MODULE_LICENSE("GPL");
> diff --git a/include/soc/fsl/qe/qmc.h b/include/soc/fsl/qe/qmc.h
> new file mode 100644
> index 000000000000..3c61a50d2ae2
> --- /dev/null
> +++ b/include/soc/fsl/qe/qmc.h
> @@ -0,0 +1,71 @@
> +/* SPDX-License-Identifier: GPL-2.0-or-later */
> +/*
> + * QMC management
> + *
> + * Copyright 2022 CS GROUP France
> + *
> + * Author: Herve Codina <herve.codina@bootlin.com>
> + */
> +#ifndef __SOC_FSL_QMC_H__
> +#define __SOC_FSL_QMC_H__
> +
> +#include <linux/types.h>
> +
> +struct device_node;
> +struct device;
> +struct qmc_chan;
> +
> +struct qmc_chan *qmc_chan_get_byphandle(struct device_node *np, const char *phandle_name);
> +void qmc_chan_put(struct qmc_chan *chan);
> +struct qmc_chan *devm_qmc_chan_get_byphandle(struct device *dev, struct device_node *np,
> +					     const char *phandle_name);
> +
> +enum qmc_mode {
> +	QMC_TRANSPARENT,
> +	QMC_HDLC,
> +};
> +
> +struct qmc_chan_info {
> +	enum qmc_mode mode;
> +	unsigned long rx_fs_rate;
> +	unsigned long rx_bit_rate;
> +	u8 nb_rx_ts;
> +	unsigned long tx_fs_rate;
> +	unsigned long tx_bit_rate;
> +	u8 nb_tx_ts;
> +};
> +
> +int qmc_chan_get_info(struct qmc_chan *chan, struct qmc_chan_info *info);
> +
> +struct qmc_chan_param {
> +	enum qmc_mode mode;
> +	union {
> +		struct {
> +			u16 max_rx_buf_size;
> +			u16 max_rx_frame_size;
> +			bool is_crc32;
> +		} hdlc;
> +		struct {
> +			u16 max_rx_buf_size;
> +		} transp;
> +	};
> +};
> +
> +int qmc_chan_set_param(struct qmc_chan *chan, const struct qmc_chan_param *param);
> +
> +int qmc_chan_write_submit(struct qmc_chan *chan, dma_addr_t addr, size_t length,
> +			  void (*complete)(void *context), void *context);
> +
> +int qmc_chan_read_submit(struct qmc_chan *chan, dma_addr_t addr, size_t length,
> +			 void (*complete)(void *context, size_t length),
> +			 void *context);
> +
> +#define QMC_CHAN_READ  (1<<0)
> +#define QMC_CHAN_WRITE (1<<1)
> +#define QMC_CHAN_ALL   (QMC_CHAN_READ | QMC_CHAN_WRITE)
> +
> +int qmc_chan_start(struct qmc_chan *chan, int direction);
> +int qmc_chan_stop(struct qmc_chan *chan, int direction);
> +int qmc_chan_reset(struct qmc_chan *chan, int direction);
> +
> +#endif /* __SOC_FSL_QMC_H__ */
Christophe Leroy Feb. 15, 2023, 4:10 p.m. UTC | #3
Hi Li and Qiang,

Le 26/01/2023 à 09:32, Herve Codina a écrit :
> The TSA (Time Slot Assigner) purpose is to route some
> TDM time-slots to other internal serial controllers.
> 
> It is available in some PowerQUICC SoC such as the
> MPC885 or MPC866.
> 
> It is also available on some Quicc Engine SoCs.
> This current version support CPM1 SoCs only and some
> enhancement are needed to support Quicc Engine SoCs.

Do you have any comment on this other patch ?

Otherwise, may I ask if you can send a Acked-by: so that the series can 
be merged in a relevant tree, most likely sound tree ?

Thanks
Christophe

> 
> Signed-off-by: Herve Codina <herve.codina@bootlin.com>
> ---
>   drivers/soc/fsl/qe/Kconfig  |  11 +
>   drivers/soc/fsl/qe/Makefile |   1 +
>   drivers/soc/fsl/qe/tsa.c    | 864 ++++++++++++++++++++++++++++++++++++
>   drivers/soc/fsl/qe/tsa.h    |  42 ++
>   4 files changed, 918 insertions(+)
>   create mode 100644 drivers/soc/fsl/qe/tsa.c
>   create mode 100644 drivers/soc/fsl/qe/tsa.h
> 
> diff --git a/drivers/soc/fsl/qe/Kconfig b/drivers/soc/fsl/qe/Kconfig
> index 357c5800b112..60ec11c9f4d9 100644
> --- a/drivers/soc/fsl/qe/Kconfig
> +++ b/drivers/soc/fsl/qe/Kconfig
> @@ -33,6 +33,17 @@ config UCC
>   	bool
>   	default y if UCC_FAST || UCC_SLOW
>   
> +config CPM_TSA
> +	tristate "CPM TSA support"
> +	depends on OF && HAS_IOMEM
> +	depends on CPM1 || (PPC && COMPILE_TEST)
> +	help
> +	  Freescale CPM Time Slot Assigner (TSA)
> +	  controller.
> +
> +	  This option enables support for this
> +	  controller
> +
>   config QE_TDM
>   	bool
>   	default y if FSL_UCC_HDLC
> diff --git a/drivers/soc/fsl/qe/Makefile b/drivers/soc/fsl/qe/Makefile
> index 55a555304f3a..45c961acc81b 100644
> --- a/drivers/soc/fsl/qe/Makefile
> +++ b/drivers/soc/fsl/qe/Makefile
> @@ -4,6 +4,7 @@
>   #
>   obj-$(CONFIG_QUICC_ENGINE)+= qe.o qe_common.o qe_ic.o qe_io.o
>   obj-$(CONFIG_CPM)	+= qe_common.o
> +obj-$(CONFIG_CPM_TSA)	+= tsa.o
>   obj-$(CONFIG_UCC)	+= ucc.o
>   obj-$(CONFIG_UCC_SLOW)	+= ucc_slow.o
>   obj-$(CONFIG_UCC_FAST)	+= ucc_fast.o
> diff --git a/drivers/soc/fsl/qe/tsa.c b/drivers/soc/fsl/qe/tsa.c
> new file mode 100644
> index 000000000000..91b4c89fa5b3
> --- /dev/null
> +++ b/drivers/soc/fsl/qe/tsa.c
> @@ -0,0 +1,864 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * TSA driver
> + *
> + * Copyright 2022 CS GROUP France
> + *
> + * Author: Herve Codina <herve.codina@bootlin.com>
> + */
> +
> +#include "tsa.h"
> +#include <dt-bindings/soc/fsl,tsa.h>
> +#include <linux/clk.h>
> +#include <linux/io.h>
> +#include <linux/module.h>
> +#include <linux/of.h>
> +#include <linux/of_platform.h>
> +#include <linux/platform_device.h>
> +#include <linux/slab.h>
> +
> +
> +/* TSA SI RAM routing tables entry */
> +#define TSA_SIRAM_ENTRY_LAST		(1 << 16)
> +#define TSA_SIRAM_ENTRY_BYTE		(1 << 17)
> +#define TSA_SIRAM_ENTRY_CNT(x)		(((x) & 0x0f) << 18)
> +#define TSA_SIRAM_ENTRY_CSEL_MASK	(0x7 << 22)
> +#define TSA_SIRAM_ENTRY_CSEL_NU		(0x0 << 22)
> +#define TSA_SIRAM_ENTRY_CSEL_SCC2	(0x2 << 22)
> +#define TSA_SIRAM_ENTRY_CSEL_SCC3	(0x3 << 22)
> +#define TSA_SIRAM_ENTRY_CSEL_SCC4	(0x4 << 22)
> +#define TSA_SIRAM_ENTRY_CSEL_SMC1	(0x5 << 22)
> +#define TSA_SIRAM_ENTRY_CSEL_SMC2	(0x6 << 22)
> +
> +/* SI mode register (32 bits) */
> +#define TSA_SIMODE	0x00
> +#define   TSA_SIMODE_SMC2			0x80000000
> +#define   TSA_SIMODE_SMC1			0x00008000
> +#define   TSA_SIMODE_TDMA(x)			((x) << 0)
> +#define   TSA_SIMODE_TDMB(x)			((x) << 16)
> +#define     TSA_SIMODE_TDM_MASK			0x0fff
> +#define     TSA_SIMODE_TDM_SDM_MASK		0x0c00
> +#define       TSA_SIMODE_TDM_SDM_NORM		0x0000
> +#define       TSA_SIMODE_TDM_SDM_ECHO		0x0400
> +#define       TSA_SIMODE_TDM_SDM_INTL_LOOP	0x0800
> +#define       TSA_SIMODE_TDM_SDM_LOOP_CTRL	0x0c00
> +#define     TSA_SIMODE_TDM_RFSD(x)		((x) << 8)
> +#define     TSA_SIMODE_TDM_DSC			0x0080
> +#define     TSA_SIMODE_TDM_CRT			0x0040
> +#define     TSA_SIMODE_TDM_STZ			0x0020
> +#define     TSA_SIMODE_TDM_CE			0x0010
> +#define     TSA_SIMODE_TDM_FE			0x0008
> +#define     TSA_SIMODE_TDM_GM			0x0004
> +#define     TSA_SIMODE_TDM_TFSD(x)		((x) << 0)
> +
> +/* SI global mode register (8 bits) */
> +#define TSA_SIGMR	0x04
> +#define TSA_SIGMR_ENB			(1<<3)
> +#define TSA_SIGMR_ENA			(1<<2)
> +#define TSA_SIGMR_RDM_MASK		0x03
> +#define   TSA_SIGMR_RDM_STATIC_TDMA	0x00
> +#define   TSA_SIGMR_RDM_DYN_TDMA	0x01
> +#define   TSA_SIGMR_RDM_STATIC_TDMAB	0x02
> +#define   TSA_SIGMR_RDM_DYN_TDMAB	0x03
> +
> +/* SI status register (8 bits) */
> +#define TSA_SISTR	0x06
> +
> +/* SI command register (8 bits) */
> +#define TSA_SICMR	0x07
> +
> +/* SI clock route register (32 bits) */
> +#define TSA_SICR	0x0C
> +#define   TSA_SICR_SCC2(x)		((x) << 8)
> +#define   TSA_SICR_SCC3(x)		((x) << 16)
> +#define   TSA_SICR_SCC4(x)		((x) << 24)
> +#define     TSA_SICR_SCC_MASK		0x0ff
> +#define     TSA_SICR_SCC_GRX		(1 << 7)
> +#define     TSA_SICR_SCC_SCX_TSA	(1 << 6)
> +#define     TSA_SICR_SCC_RXCS_MASK	(0x7 << 3)
> +#define       TSA_SICR_SCC_RXCS_BRG1	(0x0 << 3)
> +#define       TSA_SICR_SCC_RXCS_BRG2	(0x1 << 3)
> +#define       TSA_SICR_SCC_RXCS_BRG3	(0x2 << 3)
> +#define       TSA_SICR_SCC_RXCS_BRG4	(0x3 << 3)
> +#define       TSA_SICR_SCC_RXCS_CLK15	(0x4 << 3)
> +#define       TSA_SICR_SCC_RXCS_CLK26	(0x5 << 3)
> +#define       TSA_SICR_SCC_RXCS_CLK37	(0x6 << 3)
> +#define       TSA_SICR_SCC_RXCS_CLK48	(0x7 << 3)
> +#define     TSA_SICR_SCC_TXCS_MASK	(0x7 << 0)
> +#define       TSA_SICR_SCC_TXCS_BRG1	(0x0 << 0)
> +#define       TSA_SICR_SCC_TXCS_BRG2	(0x1 << 0)
> +#define       TSA_SICR_SCC_TXCS_BRG3	(0x2 << 0)
> +#define       TSA_SICR_SCC_TXCS_BRG4	(0x3 << 0)
> +#define       TSA_SICR_SCC_TXCS_CLK15	(0x4 << 0)
> +#define       TSA_SICR_SCC_TXCS_CLK26	(0x5 << 0)
> +#define       TSA_SICR_SCC_TXCS_CLK37	(0x6 << 0)
> +#define       TSA_SICR_SCC_TXCS_CLK48	(0x7 << 0)
> +
> +/* Serial interface RAM pointer register (32 bits) */
> +#define TSA_SIRP	0x10
> +
> +struct tsa_entries_area {
> +	void *__iomem entries_start;
> +	void *__iomem entries_next;
> +	void *__iomem last_entry;
> +};
> +
> +struct tsa_tdm {
> +	bool is_enable;
> +	struct clk *l1rclk_clk;
> +	struct clk *l1rsync_clk;
> +	struct clk *l1tclk_clk;
> +	struct clk *l1tsync_clk;
> +	u32 simode_tdm;
> +};
> +
> +#define TSA_TDMA	0
> +#define TSA_TDMB	1
> +
> +struct tsa {
> +	struct device *dev;
> +	void *__iomem si_regs;
> +	void *__iomem si_ram;
> +	resource_size_t si_ram_sz;
> +	spinlock_t	lock;
> +	int tdms; /* TSA_TDMx ORed */
> +	struct tsa_tdm tdm[2]; /* TDMa and TDMb */
> +	struct tsa_serial {
> +		unsigned int id;
> +		struct tsa_serial_info info;
> +	} serials[6];
> +};
> +
> +static inline struct tsa *tsa_serial_get_tsa(struct tsa_serial *tsa_serial)
> +{
> +	/* The serials table is indexed by the serial id */
> +	return container_of(tsa_serial, struct tsa, serials[tsa_serial->id]);
> +}
> +
> +static inline void tsa_write32(void *__iomem addr, u32 val)
> +{
> +	iowrite32be(val, addr);
> +}
> +
> +static inline u32 tsa_read32(void *__iomem addr)
> +{
> +	return ioread32be(addr);
> +}
> +
> +static inline void tsa_clrbits32(void *__iomem addr, u32 clr)
> +{
> +	tsa_write32(addr, tsa_read32(addr) & ~clr);
> +}
> +
> +static inline void tsa_clrsetbits32(void *__iomem addr, u32 clr, u32 set)
> +{
> +	tsa_write32(addr, (tsa_read32(addr) & ~clr) | set);
> +}
> +
> +int tsa_serial_connect(struct tsa_serial *tsa_serial)
> +{
> +	struct tsa *tsa = tsa_serial_get_tsa(tsa_serial);
> +	unsigned long flags;
> +	u32 clear;
> +	u32 set;
> +
> +	switch (tsa_serial->id) {
> +	case FSL_CPM_TSA_SCC2:
> +		clear = TSA_SICR_SCC2(TSA_SICR_SCC_MASK);
> +		set = TSA_SICR_SCC2(TSA_SICR_SCC_SCX_TSA);
> +		break;
> +	case FSL_CPM_TSA_SCC3:
> +		clear = TSA_SICR_SCC3(TSA_SICR_SCC_MASK);
> +		set = TSA_SICR_SCC3(TSA_SICR_SCC_SCX_TSA);
> +		break;
> +	case FSL_CPM_TSA_SCC4:
> +		clear = TSA_SICR_SCC4(TSA_SICR_SCC_MASK);
> +		set = TSA_SICR_SCC4(TSA_SICR_SCC_SCX_TSA);
> +		break;
> +	default:
> +		dev_err(tsa->dev, "Unsupported serial id %u\n", tsa_serial->id);
> +		return -EINVAL;
> +	}
> +
> +	spin_lock_irqsave(&tsa->lock, flags);
> +	tsa_clrsetbits32(tsa->si_regs + TSA_SICR, clear, set);
> +	spin_unlock_irqrestore(&tsa->lock, flags);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(tsa_serial_connect);
> +
> +int tsa_serial_disconnect(struct tsa_serial *tsa_serial)
> +{
> +	struct tsa *tsa = tsa_serial_get_tsa(tsa_serial);
> +	unsigned long flags;
> +	u32 clear;
> +
> +	switch (tsa_serial->id) {
> +	case FSL_CPM_TSA_SCC2:
> +		clear = TSA_SICR_SCC2(TSA_SICR_SCC_MASK);
> +		break;
> +	case FSL_CPM_TSA_SCC3:
> +		clear = TSA_SICR_SCC3(TSA_SICR_SCC_MASK);
> +		break;
> +	case FSL_CPM_TSA_SCC4:
> +		clear = TSA_SICR_SCC4(TSA_SICR_SCC_MASK);
> +		break;
> +	default:
> +		dev_err(tsa->dev, "Unsupported serial id %u\n", tsa_serial->id);
> +		return -EINVAL;
> +	}
> +
> +	spin_lock_irqsave(&tsa->lock, flags);
> +	tsa_clrsetbits32(tsa->si_regs + TSA_SICR, clear, 0);
> +	spin_unlock_irqrestore(&tsa->lock, flags);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(tsa_serial_disconnect);
> +
> +int tsa_serial_get_info(struct tsa_serial *tsa_serial, struct tsa_serial_info *info)
> +{
> +	memcpy(info, &tsa_serial->info, sizeof(*info));
> +	return 0;
> +}
> +EXPORT_SYMBOL(tsa_serial_get_info);
> +
> +static void tsa_init_entries_area(struct tsa *tsa, struct tsa_entries_area *area,
> +				  u32 tdms, u32 tdm_id, bool is_rx)
> +{
> +	resource_size_t quarter;
> +	resource_size_t half;
> +
> +	quarter = tsa->si_ram_sz/4;
> +	half = tsa->si_ram_sz/2;
> +
> +	if (tdms == BIT(TSA_TDMA)) {
> +		/* Only TDMA */
> +		if (is_rx) {
> +			/* First half of si_ram */
> +			area->entries_start = tsa->si_ram;
> +			area->entries_next = area->entries_start + half;
> +			area->last_entry = NULL;
> +		} else {
> +			/* Second half of si_ram */
> +			area->entries_start = tsa->si_ram + half;
> +			area->entries_next = area->entries_start + half;
> +			area->last_entry = NULL;
> +		}
> +	} else {
> +		/* Only TDMB or both TDMs */
> +		if (tdm_id == TSA_TDMA) {
> +			if (is_rx) {
> +				/* First half of first half of si_ram */
> +				area->entries_start = tsa->si_ram;
> +				area->entries_next = area->entries_start + quarter;
> +				area->last_entry = NULL;
> +			} else {
> +				/* First half of second half of si_ram */
> +				area->entries_start = tsa->si_ram + (2 * quarter);
> +				area->entries_next = area->entries_start + quarter;
> +				area->last_entry = NULL;
> +			}
> +		} else {
> +			if (is_rx) {
> +				/* Second half of first half of si_ram */
> +				area->entries_start = tsa->si_ram + quarter;
> +				area->entries_next = area->entries_start + quarter;
> +				area->last_entry = NULL;
> +			} else {
> +				/* Second half of second half of si_ram */
> +				area->entries_start = tsa->si_ram + (3 * quarter);
> +				area->entries_next = area->entries_start + quarter;
> +				area->last_entry = NULL;
> +			}
> +		}
> +	}
> +}
> +
> +static const char *tsa_serial_id2name(struct tsa *tsa, u32 serial_id)
> +{
> +	switch (serial_id) {
> +	case FSL_CPM_TSA_NU:	return "Not used";
> +	case FSL_CPM_TSA_SCC2:	return "SCC2";
> +	case FSL_CPM_TSA_SCC3:	return "SCC3";
> +	case FSL_CPM_TSA_SCC4:	return "SCC4";
> +	case FSL_CPM_TSA_SMC1:	return "SMC1";
> +	case FSL_CPM_TSA_SMC2:	return "SMC2";
> +	default:
> +		break;
> +	}
> +	return NULL;
> +}
> +
> +static u32 tsa_serial_id2csel(struct tsa *tsa, u32 serial_id)
> +{
> +	switch (serial_id) {
> +	case FSL_CPM_TSA_SCC2:	return TSA_SIRAM_ENTRY_CSEL_SCC2;
> +	case FSL_CPM_TSA_SCC3:	return TSA_SIRAM_ENTRY_CSEL_SCC3;
> +	case FSL_CPM_TSA_SCC4:	return TSA_SIRAM_ENTRY_CSEL_SCC4;
> +	case FSL_CPM_TSA_SMC1:	return TSA_SIRAM_ENTRY_CSEL_SMC1;
> +	case FSL_CPM_TSA_SMC2:	return TSA_SIRAM_ENTRY_CSEL_SMC2;
> +	default:
> +		break;
> +	}
> +	return TSA_SIRAM_ENTRY_CSEL_NU;
> +}
> +
> +static int tsa_add_entry(struct tsa *tsa, struct tsa_entries_area *area,
> +			 u32 count, u32 serial_id)
> +{
> +	void *__iomem addr;
> +	u32 left;
> +	u32 val;
> +	u32 cnt;
> +	u32 nb;
> +
> +	addr = area->last_entry ? area->last_entry + 4 : area->entries_start;
> +
> +	nb = DIV_ROUND_UP(count, 8);
> +	if ((addr + (nb * 4)) > area->entries_next) {
> +		dev_err(tsa->dev, "si ram area full\n");
> +		return -ENOSPC;
> +	}
> +
> +	if (area->last_entry) {
> +		/* Clear last flag */
> +		tsa_clrbits32(area->last_entry, TSA_SIRAM_ENTRY_LAST);
> +	}
> +
> +	left = count;
> +	while (left) {
> +		val = TSA_SIRAM_ENTRY_BYTE | tsa_serial_id2csel(tsa, serial_id);
> +
> +		if (left > 16) {
> +			cnt = 16;
> +		} else {
> +			cnt = left;
> +			val |= TSA_SIRAM_ENTRY_LAST;
> +			area->last_entry = addr;
> +		}
> +		val |= TSA_SIRAM_ENTRY_CNT(cnt - 1);
> +
> +		tsa_write32(addr, val);
> +		addr += 4;
> +		left -= cnt;
> +	}
> +
> +	return 0;
> +}
> +
> +static int tsa_of_parse_tdm_route(struct tsa *tsa, struct device_node *tdm_np,
> +				  u32 tdms, u32 tdm_id, bool is_rx)
> +{
> +	struct tsa_entries_area area;
> +	const char *route_name;
> +	u32 serial_id;
> +	int len, i;
> +	u32 count;
> +	const char *serial_name;
> +	struct tsa_serial_info *serial_info;
> +	struct tsa_tdm *tdm;
> +	int ret;
> +	u32 ts;
> +
> +	route_name = is_rx ? "fsl,rx-ts-routes" : "fsl,tx-ts-routes";
> +
> +	len = of_property_count_u32_elems(tdm_np,  route_name);
> +	if (len < 0) {
> +		dev_err(tsa->dev, "%pOF: failed to read %s\n", tdm_np, route_name);
> +		return len;
> +	}
> +	if (len % 2 != 0) {
> +		dev_err(tsa->dev, "%pOF: wrong %s format\n", tdm_np, route_name);
> +		return -EINVAL;
> +	}
> +
> +	tsa_init_entries_area(tsa, &area, tdms, tdm_id, is_rx);
> +	ts = 0;
> +	for (i = 0; i < len; i += 2) {
> +		of_property_read_u32_index(tdm_np, route_name, i, &count);
> +		of_property_read_u32_index(tdm_np, route_name, i + 1, &serial_id);
> +
> +		if (serial_id >= ARRAY_SIZE(tsa->serials)) {
> +			dev_err(tsa->dev, "%pOF: invalid serial id (%u)\n",
> +				tdm_np, serial_id);
> +			return -EINVAL;
> +		}
> +
> +		serial_name = tsa_serial_id2name(tsa, serial_id);
> +		if (!serial_name) {
> +			dev_err(tsa->dev, "%pOF: unsupported serial id (%u)\n",
> +				tdm_np, serial_id);
> +			return -EINVAL;
> +		}
> +
> +		dev_dbg(tsa->dev, "tdm_id=%u, %s ts %u..%u -> %s\n",
> +			tdm_id, route_name, ts, ts+count-1, serial_name);
> +		ts += count;
> +
> +		ret = tsa_add_entry(tsa, &area, count, serial_id);
> +		if (ret)
> +			return ret;
> +
> +		serial_info = &tsa->serials[serial_id].info;
> +		tdm = &tsa->tdm[tdm_id];
> +		if (is_rx) {
> +			serial_info->rx_fs_rate = clk_get_rate(tdm->l1rsync_clk);
> +			serial_info->rx_bit_rate = clk_get_rate(tdm->l1rclk_clk);
> +			serial_info->nb_rx_ts += count;
> +		} else {
> +			serial_info->tx_fs_rate = tdm->l1tsync_clk ?
> +				clk_get_rate(tdm->l1tsync_clk) :
> +				clk_get_rate(tdm->l1rsync_clk);
> +			serial_info->tx_bit_rate = tdm->l1tclk_clk ?
> +				clk_get_rate(tdm->l1tclk_clk) :
> +				clk_get_rate(tdm->l1rclk_clk);
> +			serial_info->nb_tx_ts += count;
> +		}
> +	}
> +	return 0;
> +}
> +
> +static inline int tsa_of_parse_tdm_rx_route(struct tsa *tsa,
> +					    struct device_node *tdm_np,
> +					    u32 tdms, u32 tdm_id)
> +{
> +	return tsa_of_parse_tdm_route(tsa, tdm_np, tdms, tdm_id, true);
> +}
> +
> +static inline int tsa_of_parse_tdm_tx_route(struct tsa *tsa,
> +					    struct device_node *tdm_np,
> +					    u32 tdms, u32 tdm_id)
> +{
> +	return tsa_of_parse_tdm_route(tsa, tdm_np, tdms, tdm_id, false);
> +}
> +
> +static int tsa_of_parse_tdms(struct tsa *tsa, struct device_node *np)
> +{
> +	struct device_node *tdm_np;
> +	struct tsa_tdm *tdm;
> +	struct clk *clk;
> +	const char *mode;
> +	u32 tdm_id, val;
> +	int ret;
> +	int i;
> +
> +	tsa->tdms = 0;
> +	tsa->tdm[0].is_enable = false;
> +	tsa->tdm[1].is_enable = false;
> +
> +	for_each_available_child_of_node(np, tdm_np) {
> +		ret = of_property_read_u32(tdm_np, "reg", &tdm_id);
> +		if (ret) {
> +			dev_err(tsa->dev, "%pOF: failed to read reg\n", tdm_np);
> +			of_node_put(tdm_np);
> +			return ret;
> +		}
> +		switch (tdm_id) {
> +		case 0:
> +			tsa->tdms |= BIT(TSA_TDMA);
> +			break;
> +		case 1:
> +			tsa->tdms |= BIT(TSA_TDMB);
> +			break;
> +		default:
> +			dev_err(tsa->dev, "%pOF: Invalid tdm_id (%u)\n", tdm_np,
> +				tdm_id);
> +			of_node_put(tdm_np);
> +			return -EINVAL;
> +		}
> +	}
> +
> +	for_each_available_child_of_node(np, tdm_np) {
> +		ret = of_property_read_u32(tdm_np, "reg", &tdm_id);
> +		if (ret) {
> +			dev_err(tsa->dev, "%pOF: failed to read reg\n", tdm_np);
> +			of_node_put(tdm_np);
> +			return ret;
> +		}
> +
> +		tdm = &tsa->tdm[tdm_id];
> +
> +		mode = "disabled";
> +		ret = of_property_read_string(tdm_np, "fsl,diagnostic-mode", &mode);
> +		if (ret && ret != -EINVAL) {
> +			dev_err(tsa->dev, "%pOF: failed to read fsl,diagnostic-mode\n",
> +				tdm_np);
> +			of_node_put(tdm_np);
> +			return ret;
> +		}
> +		if (!strcmp(mode, "disabled")) {
> +			tdm->simode_tdm |= TSA_SIMODE_TDM_SDM_NORM;
> +		} else if (!strcmp(mode, "echo")) {
> +			tdm->simode_tdm |= TSA_SIMODE_TDM_SDM_ECHO;
> +		} else if (!strcmp(mode, "internal-loopback")) {
> +			tdm->simode_tdm |= TSA_SIMODE_TDM_SDM_INTL_LOOP;
> +		} else if (!strcmp(mode, "control-loopback")) {
> +			tdm->simode_tdm |= TSA_SIMODE_TDM_SDM_LOOP_CTRL;
> +		} else {
> +			dev_err(tsa->dev, "%pOF: Invalid fsl,diagnostic-mode (%s)\n",
> +				tdm_np, mode);
> +			of_node_put(tdm_np);
> +			return -EINVAL;
> +		}
> +
> +		val = 0;
> +		ret = of_property_read_u32(tdm_np, "fsl,rx-frame-sync-delay-bits",
> +					   &val);
> +		if (ret && ret != -EINVAL) {
> +			dev_err(tsa->dev,
> +				"%pOF: failed to read fsl,rx-frame-sync-delay-bits\n",
> +				tdm_np);
> +			of_node_put(tdm_np);
> +			return ret;
> +		}
> +		if (val > 3) {
> +			dev_err(tsa->dev,
> +				"%pOF: Invalid fsl,rx-frame-sync-delay-bits (%u)\n",
> +				tdm_np, val);
> +			of_node_put(tdm_np);
> +			return -EINVAL;
> +		}
> +		tdm->simode_tdm |= TSA_SIMODE_TDM_RFSD(val);
> +
> +		val = 0;
> +		ret = of_property_read_u32(tdm_np, "fsl,tx-frame-sync-delay-bits",
> +					   &val);
> +		if (ret && ret != -EINVAL) {
> +			dev_err(tsa->dev,
> +				"%pOF: failed to read fsl,tx-frame-sync-delay-bits\n",
> +				tdm_np);
> +			of_node_put(tdm_np);
> +			return ret;
> +		}
> +		if (val > 3) {
> +			dev_err(tsa->dev,
> +				"%pOF: Invalid fsl,tx-frame-sync-delay-bits (%u)\n",
> +				tdm_np, val);
> +			of_node_put(tdm_np);
> +			return -EINVAL;
> +		}
> +		tdm->simode_tdm |= TSA_SIMODE_TDM_TFSD(val);
> +
> +		if (of_property_read_bool(tdm_np, "fsl,common-rxtx-pins"))
> +			tdm->simode_tdm |= TSA_SIMODE_TDM_CRT;
> +
> +		if (of_property_read_bool(tdm_np, "fsl,clock-falling-edge"))
> +			tdm->simode_tdm |= TSA_SIMODE_TDM_CE;
> +
> +		if (of_property_read_bool(tdm_np, "fsl,fsync-rising-edge"))
> +			tdm->simode_tdm |= TSA_SIMODE_TDM_FE;
> +
> +		if (of_property_read_bool(tdm_np, "fsl,double-speed-clock"))
> +			tdm->simode_tdm |= TSA_SIMODE_TDM_DSC;
> +
> +		clk = of_clk_get_by_name(tdm_np, "l1rsync");
> +		if (IS_ERR(clk)) {
> +			ret = PTR_ERR(clk);
> +			of_node_put(tdm_np);
> +			goto err;
> +		}
> +		ret = clk_prepare_enable(clk);
> +		if (ret) {
> +			clk_put(clk);
> +			of_node_put(tdm_np);
> +			goto err;
> +		}
> +		tdm->l1rsync_clk = clk;
> +
> +		clk = of_clk_get_by_name(tdm_np, "l1rclk");
> +		if (IS_ERR(clk)) {
> +			ret = PTR_ERR(clk);
> +			of_node_put(tdm_np);
> +			goto err;
> +		}
> +		ret = clk_prepare_enable(clk);
> +		if (ret) {
> +			clk_put(clk);
> +			of_node_put(tdm_np);
> +			goto err;
> +		}
> +		tdm->l1rclk_clk = clk;
> +
> +		if (!(tdm->simode_tdm & TSA_SIMODE_TDM_CRT)) {
> +			clk = of_clk_get_by_name(tdm_np, "l1tsync");
> +			if (IS_ERR(clk)) {
> +				ret = PTR_ERR(clk);
> +				of_node_put(tdm_np);
> +				goto err;
> +			}
> +			ret = clk_prepare_enable(clk);
> +			if (ret) {
> +				clk_put(clk);
> +				of_node_put(tdm_np);
> +				goto err;
> +			}
> +			tdm->l1tsync_clk = clk;
> +
> +			clk = of_clk_get_by_name(tdm_np, "l1tclk");
> +			if (IS_ERR(clk)) {
> +				ret = PTR_ERR(clk);
> +				of_node_put(tdm_np);
> +				goto err;
> +			}
> +			ret = clk_prepare_enable(clk);
> +			if (ret) {
> +				clk_put(clk);
> +				of_node_put(tdm_np);
> +				goto err;
> +			}
> +			tdm->l1tclk_clk = clk;
> +		}
> +
> +		ret = tsa_of_parse_tdm_rx_route(tsa, tdm_np, tsa->tdms, tdm_id);
> +		if (ret) {
> +			of_node_put(tdm_np);
> +			goto err;
> +		}
> +
> +		ret = tsa_of_parse_tdm_tx_route(tsa, tdm_np, tsa->tdms, tdm_id);
> +		if (ret) {
> +			of_node_put(tdm_np);
> +			goto err;
> +		}
> +
> +		tdm->is_enable = true;
> +	}
> +	return 0;
> +
> +err:
> +	for (i = 0; i < 2; i++) {
> +		if (tsa->tdm[i].l1rsync_clk) {
> +			clk_disable_unprepare(tsa->tdm[i].l1rsync_clk);
> +			clk_put(tsa->tdm[i].l1rsync_clk);
> +		}
> +		if (tsa->tdm[i].l1rclk_clk) {
> +			clk_disable_unprepare(tsa->tdm[i].l1rclk_clk);
> +			clk_put(tsa->tdm[i].l1rclk_clk);
> +		}
> +		if (tsa->tdm[i].l1tsync_clk) {
> +			clk_disable_unprepare(tsa->tdm[i].l1rsync_clk);
> +			clk_put(tsa->tdm[i].l1rsync_clk);
> +		}
> +		if (tsa->tdm[i].l1tclk_clk) {
> +			clk_disable_unprepare(tsa->tdm[i].l1rclk_clk);
> +			clk_put(tsa->tdm[i].l1rclk_clk);
> +		}
> +	}
> +	return ret;
> +}
> +
> +static void tsa_init_si_ram(struct tsa *tsa)
> +{
> +	resource_size_t i;
> +
> +	/* Fill all entries as the last one */
> +	for (i = 0; i < tsa->si_ram_sz; i += 4)
> +		tsa_write32(tsa->si_ram + i, TSA_SIRAM_ENTRY_LAST);
> +}
> +
> +static int tsa_probe(struct platform_device *pdev)
> +{
> +	struct device_node *np = pdev->dev.of_node;
> +	struct resource *res;
> +	struct tsa *tsa;
> +	unsigned int i;
> +	u32 val;
> +	int ret;
> +
> +	tsa = devm_kzalloc(&pdev->dev, sizeof(*tsa), GFP_KERNEL);
> +	if (!tsa)
> +		return -ENOMEM;
> +
> +	tsa->dev = &pdev->dev;
> +
> +	for (i = 0; i < ARRAY_SIZE(tsa->serials); i++)
> +		tsa->serials[i].id = i;
> +
> +	spin_lock_init(&tsa->lock);
> +
> +	tsa->si_regs = devm_platform_ioremap_resource_byname(pdev, "si_regs");
> +	if (IS_ERR(tsa->si_regs))
> +		return PTR_ERR(tsa->si_regs);
> +
> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "si_ram");
> +	if (!res) {
> +		dev_err(tsa->dev, "si_ram resource missing\n");
> +		return -EINVAL;
> +	}
> +	tsa->si_ram_sz = resource_size(res);
> +	tsa->si_ram = devm_ioremap_resource(&pdev->dev, res);
> +	if (IS_ERR(tsa->si_ram))
> +		return PTR_ERR(tsa->si_ram);
> +
> +	tsa_init_si_ram(tsa);
> +
> +	ret = tsa_of_parse_tdms(tsa, np);
> +	if (ret)
> +		return ret;
> +
> +	/* Set SIMODE */
> +	val = 0;
> +	if (tsa->tdm[0].is_enable)
> +		val |= TSA_SIMODE_TDMA(tsa->tdm[0].simode_tdm);
> +	if (tsa->tdm[1].is_enable)
> +		val |= TSA_SIMODE_TDMB(tsa->tdm[1].simode_tdm);
> +
> +	tsa_clrsetbits32(tsa->si_regs + TSA_SIMODE,
> +			 TSA_SIMODE_TDMA(TSA_SIMODE_TDM_MASK) |
> +			 TSA_SIMODE_TDMB(TSA_SIMODE_TDM_MASK),
> +			 val);
> +
> +	/* Set SIGMR */
> +	val = (tsa->tdms == BIT(TSA_TDMA)) ?
> +		TSA_SIGMR_RDM_STATIC_TDMA : TSA_SIGMR_RDM_STATIC_TDMAB;
> +	if (tsa->tdms & BIT(TSA_TDMA))
> +		val |= TSA_SIGMR_ENA;
> +	if (tsa->tdms & BIT(TSA_TDMB))
> +		val |= TSA_SIGMR_ENB;
> +	out_8(tsa->si_regs + TSA_SIGMR, val);
> +
> +	platform_set_drvdata(pdev, tsa);
> +
> +	return 0;
> +}
> +
> +static int tsa_remove(struct platform_device *pdev)
> +{
> +	struct tsa *tsa = platform_get_drvdata(pdev);
> +	int i;
> +
> +	for (i = 0; i < 2; i++) {
> +		if (tsa->tdm[i].l1rsync_clk) {
> +			clk_disable_unprepare(tsa->tdm[i].l1rsync_clk);
> +			clk_put(tsa->tdm[i].l1rsync_clk);
> +		}
> +		if (tsa->tdm[i].l1rclk_clk) {
> +			clk_disable_unprepare(tsa->tdm[i].l1rclk_clk);
> +			clk_put(tsa->tdm[i].l1rclk_clk);
> +		}
> +		if (tsa->tdm[i].l1tsync_clk) {
> +			clk_disable_unprepare(tsa->tdm[i].l1rsync_clk);
> +			clk_put(tsa->tdm[i].l1rsync_clk);
> +		}
> +		if (tsa->tdm[i].l1tclk_clk) {
> +			clk_disable_unprepare(tsa->tdm[i].l1rclk_clk);
> +			clk_put(tsa->tdm[i].l1rclk_clk);
> +		}
> +	}
> +	return 0;
> +}
> +
> +static const struct of_device_id tsa_id_table[] = {
> +	{ .compatible = "fsl,cpm1-tsa" },
> +	{} /* sentinel */
> +};
> +MODULE_DEVICE_TABLE(of, tsa_id_table);
> +
> +static struct platform_driver tsa_driver = {
> +	.driver = {
> +		.name = "fsl-tsa",
> +		.of_match_table = of_match_ptr(tsa_id_table),
> +	},
> +	.probe = tsa_probe,
> +	.remove = tsa_remove,
> +};
> +module_platform_driver(tsa_driver);
> +
> +struct tsa_serial *tsa_serial_get_byphandle(struct device_node *np,
> +					    const char *phandle_name)
> +{
> +	struct of_phandle_args out_args;
> +	struct platform_device *pdev;
> +	struct tsa_serial *tsa_serial;
> +	struct tsa *tsa;
> +	int ret;
> +
> +	ret = of_parse_phandle_with_fixed_args(np, phandle_name, 1, 0, &out_args);
> +	if (ret < 0)
> +		return ERR_PTR(ret);
> +
> +	if (!of_match_node(tsa_driver.driver.of_match_table, out_args.np)) {
> +		of_node_put(out_args.np);
> +		return ERR_PTR(-EINVAL);
> +	}
> +
> +	pdev = of_find_device_by_node(out_args.np);
> +	of_node_put(out_args.np);
> +	if (!pdev)
> +		return ERR_PTR(-ENODEV);
> +
> +	tsa = platform_get_drvdata(pdev);
> +	if (!tsa) {
> +		platform_device_put(pdev);
> +		return ERR_PTR(-EPROBE_DEFER);
> +	}
> +
> +	if (out_args.args_count != 1) {
> +		platform_device_put(pdev);
> +		return ERR_PTR(-EINVAL);
> +	}
> +
> +	if (out_args.args[0] >= ARRAY_SIZE(tsa->serials)) {
> +		platform_device_put(pdev);
> +		return ERR_PTR(-EINVAL);
> +	}
> +
> +	tsa_serial = &tsa->serials[out_args.args[0]];
> +
> +	/*
> +	 * Be sure that the serial id matches the phandle arg.
> +	 * The tsa_serials table is indexed by serial ids. The serial id is set
> +	 * during the probe() call and needs to be coherent.
> +	 */
> +	if (WARN_ON(tsa_serial->id != out_args.args[0])) {
> +		platform_device_put(pdev);
> +		return ERR_PTR(-EINVAL);
> +	}
> +
> +	return tsa_serial;
> +}
> +EXPORT_SYMBOL(tsa_serial_get_byphandle);
> +
> +void tsa_serial_put(struct tsa_serial *tsa_serial)
> +{
> +	struct tsa *tsa = tsa_serial_get_tsa(tsa_serial);
> +
> +	put_device(tsa->dev);
> +}
> +EXPORT_SYMBOL(tsa_serial_put);
> +
> +static void devm_tsa_serial_release(struct device *dev, void *res)
> +{
> +	struct tsa_serial **tsa_serial = res;
> +
> +	tsa_serial_put(*tsa_serial);
> +}
> +
> +struct tsa_serial *devm_tsa_serial_get_byphandle(struct device *dev,
> +						 struct device_node *np,
> +						 const char *phandle_name)
> +{
> +	struct tsa_serial *tsa_serial;
> +	struct tsa_serial **dr;
> +
> +	dr = devres_alloc(devm_tsa_serial_release, sizeof(*dr), GFP_KERNEL);
> +	if (!dr)
> +		return ERR_PTR(-ENOMEM);
> +
> +	tsa_serial = tsa_serial_get_byphandle(np, phandle_name);
> +	if (!IS_ERR(tsa_serial)) {
> +		*dr = tsa_serial;
> +		devres_add(dev, dr);
> +	} else {
> +		devres_free(dr);
> +	}
> +
> +	return tsa_serial;
> +}
> +EXPORT_SYMBOL(devm_tsa_serial_get_byphandle);
> +
> +MODULE_AUTHOR("Herve Codina <herve.codina@bootlin.com>");
> +MODULE_DESCRIPTION("CPM TSA driver");
> +MODULE_LICENSE("GPL");
> diff --git a/drivers/soc/fsl/qe/tsa.h b/drivers/soc/fsl/qe/tsa.h
> new file mode 100644
> index 000000000000..030e79bb978a
> --- /dev/null
> +++ b/drivers/soc/fsl/qe/tsa.h
> @@ -0,0 +1,42 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * TSA management
> + *
> + * Copyright 2022 CS GROUP France
> + *
> + * Author: Herve Codina <herve.codina@bootlin.com>
> + */
> +#ifndef __SOC_FSL_TSA_H__
> +#define __SOC_FSL_TSA_H__
> +
> +#include <linux/types.h>
> +
> +struct device_node;
> +struct device;
> +struct tsa_serial;
> +
> +struct tsa_serial *tsa_serial_get_byphandle(struct device_node *np,
> +					    const char *phandle_name);
> +void tsa_serial_tsa_put(struct tsa_serial *tsa_serial);
> +struct tsa_serial *devm_tsa_serial_get_byphandle(struct device *dev,
> +						 struct device_node *np,
> +						 const char *phandle_name);
> +
> +/* Connect and disconnect the TSA serial */
> +int tsa_serial_connect(struct tsa_serial *tsa_serial);
> +int tsa_serial_disconnect(struct tsa_serial *tsa_serial);
> +
> +/* Cell information */
> +struct tsa_serial_info {
> +	unsigned long rx_fs_rate;
> +	unsigned long rx_bit_rate;
> +	u8 nb_rx_ts;
> +	unsigned long tx_fs_rate;
> +	unsigned long tx_bit_rate;
> +	u8 nb_tx_ts;
> +};
> +
> +/* Get information */
> +int tsa_serial_get_info(struct tsa_serial *tsa_serial, struct tsa_serial_info *info);
> +
> +#endif /* __SOC_FSL_TSA_H__ */
Leo Li Feb. 15, 2023, 9:42 p.m. UTC | #4
> -----Original Message-----
> From: Christophe Leroy <christophe.leroy@csgroup.eu>
> Sent: Wednesday, February 15, 2023 10:08 AM
> To: Leo Li <leoyang.li@nxp.com>; Qiang Zhao <qiang.zhao@nxp.com>
> Cc: linuxppc-dev@lists.ozlabs.org; Krzysztof Kozlowski
> <krzysztof.kozlowski+dt@linaro.org>; Rob Herring <robh+dt@kernel.org>;
> Herve Codina <herve.codina@bootlin.com>; linux-arm-
> kernel@lists.infradead.org; devicetree@vger.kernel.org; linux-
> kernel@vger.kernel.org; Nicholas Piggin <npiggin@gmail.com>; Fabio
> Estevam <festevam@gmail.com>; Xiubo Li <Xiubo.Lee@gmail.com>;
> Shengjiu Wang <shengjiu.wang@gmail.com>; Takashi Iwai
> <tiwai@suse.com>; Jaroslav Kysela <perex@perex.cz>; Michael Ellerman
> <mpe@ellerman.id.au>; Mark Brown <broonie@kernel.org>; Liam Girdwood
> <lgirdwood@gmail.com>; alsa-devel@alsa-project.org; Thomas Petazzoni
> <thomas.petazzoni@bootlin.com>; Nicolin Chen <nicoleotsuka@gmail.com>
> Subject: Re: [PATCH v4 06/10] soc: fsl: cmp1: Add support for QMC
> 
> Hi Li and Qiang
> 
> Le 26/01/2023 à 09:32, Herve Codina a écrit :
> > The QMC (QUICC Multichannel Controller) emulates up to 64 channels
> > within one serial controller using the same TDM physical interface
> > routed from the TSA.
> >
> > It is available in some	PowerQUICC SoC such as the
> > MPC885 or MPC866.
> >
> > It is also available on some Quicc Engine SoCs.
> > This current version support CPM1 SoCs only and some enhancement are
> > needed to support Quicc Engine SoCs.
> 
> Do you have any comment on this patch ?
> 
> Otherwise, may I ask you to send your Acked-by: so that the series can be
> merged in a relevant tree, most likely sound tree ?

Sure.  I will give it a review.

> 
> Thanks
> Christophe
> 
> >
> > Signed-off-by: Herve Codina <herve.codina@bootlin.com>
> > ---
> >   drivers/soc/fsl/qe/Kconfig  |   12 +
> >   drivers/soc/fsl/qe/Makefile |    1 +
> >   drivers/soc/fsl/qe/qmc.c    | 1533
> +++++++++++++++++++++++++++++++++++
> >   include/soc/fsl/qe/qmc.h    |   71 ++
> >   4 files changed, 1617 insertions(+)
> >   create mode 100644 drivers/soc/fsl/qe/qmc.c
> >   create mode 100644 include/soc/fsl/qe/qmc.h
> >
> > diff --git a/drivers/soc/fsl/qe/Kconfig b/drivers/soc/fsl/qe/Kconfig
> > index 60ec11c9f4d9..25b218351ae3 100644
> > --- a/drivers/soc/fsl/qe/Kconfig
> > +++ b/drivers/soc/fsl/qe/Kconfig
> > @@ -44,6 +44,18 @@ config CPM_TSA
> >   	  This option enables support for this
> >   	  controller
> >
> > +config CPM_QMC
> > +	tristate "CPM QMC support"
> > +	depends on OF && HAS_IOMEM
> > +	depends on CPM1 || (PPC && COMPILE_TEST)
> > +	depends on CPM_TSA
> > +	help
> > +	  Freescale CPM QUICC Multichannel Controller
> > +	  (QMC)
> > +
> > +	  This option enables support for this
> > +	  controller
> > +
> >   config QE_TDM
> >   	bool
> >   	default y if FSL_UCC_HDLC
> > diff --git a/drivers/soc/fsl/qe/Makefile b/drivers/soc/fsl/qe/Makefile
> > index 45c961acc81b..ec8506e13113 100644
> > --- a/drivers/soc/fsl/qe/Makefile
> > +++ b/drivers/soc/fsl/qe/Makefile
> > @@ -5,6 +5,7 @@
> >   obj-$(CONFIG_QUICC_ENGINE)+= qe.o qe_common.o qe_ic.o qe_io.o
> >   obj-$(CONFIG_CPM)	+= qe_common.o
> >   obj-$(CONFIG_CPM_TSA)	+= tsa.o
> > +obj-$(CONFIG_CPM_QMC)	+= qmc.o
> >   obj-$(CONFIG_UCC)	+= ucc.o
> >   obj-$(CONFIG_UCC_SLOW)	+= ucc_slow.o
> >   obj-$(CONFIG_UCC_FAST)	+= ucc_fast.o
> > diff --git a/drivers/soc/fsl/qe/qmc.c b/drivers/soc/fsl/qe/qmc.c new
> > file mode 100644 index 000000000000..cfa7207353e0
> > --- /dev/null
> > +++ b/drivers/soc/fsl/qe/qmc.c
> > @@ -0,0 +1,1533 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * QMC driver
> > + *
> > + * Copyright 2022 CS GROUP France
> > + *
> > + * Author: Herve Codina <herve.codina@bootlin.com>  */
> > +
> > +#include <soc/fsl/qe/qmc.h>
> > +#include <linux/dma-mapping.h>
> > +#include <linux/hdlc.h>
> > +#include <linux/interrupt.h>
> > +#include <linux/io.h>
> > +#include <linux/module.h>
> > +#include <linux/of.h>
> > +#include <linux/of_platform.h>
> > +#include <linux/platform_device.h>
> > +#include <linux/slab.h>
> > +#include <soc/fsl/cpm.h>
> > +#include <sysdev/fsl_soc.h>
> > +#include "tsa.h"
> > +
> > +/* SCC general mode register high (32 bits) */
> > +#define SCC_GSMRL	0x00
> > +#define SCC_GSMRL_ENR		(1 << 5)
> > +#define SCC_GSMRL_ENT		(1 << 4)
> > +#define SCC_GSMRL_MODE_QMC	(0x0A << 0)
> > +
> > +/* SCC general mode register low (32 bits) */
> > +#define SCC_GSMRH	0x04
> > +#define   SCC_GSMRH_CTSS	(1 << 7)
> > +#define   SCC_GSMRH_CDS		(1 << 8)
> > +#define   SCC_GSMRH_CTSP	(1 << 9)
> > +#define   SCC_GSMRH_CDP		(1 << 10)
> > +
> > +/* SCC event register (16 bits) */
> > +#define SCC_SCCE	0x10
> > +#define   SCC_SCCE_IQOV		(1 << 3)
> > +#define   SCC_SCCE_GINT		(1 << 2)
> > +#define   SCC_SCCE_GUN		(1 << 1)
> > +#define   SCC_SCCE_GOV		(1 << 0)
> > +
> > +/* SCC mask register (16 bits) */
> > +#define SCC_SCCM	0x14
> > +/* Multichannel base pointer (32 bits) */
> > +#define QMC_GBL_MCBASE		0x00
> > +/* Multichannel controller state (16 bits) */
> > +#define QMC_GBL_QMCSTATE	0x04
> > +/* Maximum receive buffer length (16 bits) */
> > +#define QMC_GBL_MRBLR		0x06
> > +/* Tx time-slot assignment table pointer (16 bits) */
> > +#define QMC_GBL_TX_S_PTR	0x08
> > +/* Rx pointer (16 bits) */
> > +#define QMC_GBL_RXPTR		0x0A
> > +/* Global receive frame threshold (16 bits) */
> > +#define QMC_GBL_GRFTHR		0x0C
> > +/* Global receive frame count (16 bits) */
> > +#define QMC_GBL_GRFCNT		0x0E
> > +/* Multichannel interrupt base address (32 bits) */
> > +#define QMC_GBL_INTBASE		0x10
> > +/* Multichannel interrupt pointer (32 bits) */
> > +#define QMC_GBL_INTPTR		0x14
> > +/* Rx time-slot assignment table pointer (16 bits) */
> > +#define QMC_GBL_RX_S_PTR	0x18
> > +/* Tx pointer (16 bits) */
> > +#define QMC_GBL_TXPTR		0x1A
> > +/* CRC constant (32 bits) */
> > +#define QMC_GBL_C_MASK32	0x1C
> > +/* Time slot assignment table Rx (32 x 16 bits) */
> > +#define QMC_GBL_TSATRX		0x20
> > +/* Time slot assignment table Tx (32 x 16 bits) */
> > +#define QMC_GBL_TSATTX		0x60
> > +/* CRC constant (16 bits) */
> > +#define QMC_GBL_C_MASK16	0xA0
> > +
> > +/* TSA entry (16bit entry in TSATRX and TSATTX) */
> > +#define QMC_TSA_VALID		(1 << 15)
> > +#define QMC_TSA_WRAP		(1 << 14)
> > +#define QMC_TSA_MASK		(0x303F)
> > +#define QMC_TSA_CHANNEL(x)	((x) << 6)
> > +
> > +/* Tx buffer descriptor base address (16 bits, offset from MCBASE) */
> > +#define QMC_SPE_TBASE	0x00
> > +
> > +/* Channel mode register (16 bits) */
> > +#define QMC_SPE_CHAMR	0x02
> > +#define   QMC_SPE_CHAMR_MODE_HDLC	(1 << 15)
> > +#define   QMC_SPE_CHAMR_MODE_TRANSP	((0 << 15) | (1 << 13))
> > +#define   QMC_SPE_CHAMR_ENT		(1 << 12)
> > +#define   QMC_SPE_CHAMR_POL		(1 << 8)
> > +#define   QMC_SPE_CHAMR_HDLC_IDLM	(1 << 13)
> > +#define   QMC_SPE_CHAMR_HDLC_CRC	(1 << 7)
> > +#define   QMC_SPE_CHAMR_HDLC_NOF	(0x0f << 0)
> > +#define   QMC_SPE_CHAMR_TRANSP_RD	(1 << 14)
> > +#define   QMC_SPE_CHAMR_TRANSP_SYNC	(1 << 10)
> > +
> > +/* Tx internal state (32 bits) */
> > +#define QMC_SPE_TSTATE	0x04
> > +/* Tx buffer descriptor pointer (16 bits) */
> > +#define QMC_SPE_TBPTR	0x0C
> > +/* Zero-insertion state (32 bits) */
> > +#define QMC_SPE_ZISTATE	0x14
> > +/* Channel’s interrupt mask flags (16 bits) */
> > +#define QMC_SPE_INTMSK	0x1C
> > +/* Rx buffer descriptor base address (16 bits, offset from MCBASE) */
> > +#define QMC_SPE_RBASE	0x20
> > +/* HDLC: Maximum frame length register (16 bits) */
> > +#define QMC_SPE_MFLR	0x22
> > +/* TRANSPARENT: Transparent maximum receive length (16 bits) */
> > +#define QMC_SPE_TMRBLR	0x22
> > +/* Rx internal state (32 bits) */
> > +#define QMC_SPE_RSTATE	0x24
> > +/* Rx buffer descriptor pointer (16 bits) */
> > +#define QMC_SPE_RBPTR	0x2C
> > +/* Packs 4 bytes to 1 long word before writing to buffer (32 bits) */
> > +#define QMC_SPE_RPACK	0x30
> > +/* Zero deletion state (32 bits) */
> > +#define QMC_SPE_ZDSTATE	0x34
> > +
> > +/* Transparent synchronization (16 bits) */ #define QMC_SPE_TRNSYNC
> > +0x3C
> > +#define   QMC_SPE_TRNSYNC_RX(x)	((x) << 8)
> > +#define   QMC_SPE_TRNSYNC_TX(x)	((x) << 0)
> > +
> > +/* Interrupt related registers bits */
> > +#define QMC_INT_V		(1 << 15)
> > +#define QMC_INT_W		(1 << 14)
> > +#define QMC_INT_NID		(1 << 13)
> > +#define QMC_INT_IDL		(1 << 12)
> > +#define QMC_INT_GET_CHANNEL(x)	(((x) & 0x0FC0) >> 6)
> > +#define QMC_INT_MRF		(1 << 5)
> > +#define QMC_INT_UN		(1 << 4)
> > +#define QMC_INT_RXF		(1 << 3)
> > +#define QMC_INT_BSY		(1 << 2)
> > +#define QMC_INT_TXB		(1 << 1)
> > +#define QMC_INT_RXB		(1 << 0)
> > +
> > +/* BD related registers bits */
> > +#define QMC_BD_RX_E	(1 << 15)
> > +#define QMC_BD_RX_W	(1 << 13)
> > +#define QMC_BD_RX_I	(1 << 12)
> > +#define QMC_BD_RX_L	(1 << 11)
> > +#define QMC_BD_RX_F	(1 << 10)
> > +#define QMC_BD_RX_CM	(1 << 9)
> > +#define QMC_BD_RX_UB	(1 << 7)
> > +#define QMC_BD_RX_LG	(1 << 5)
> > +#define QMC_BD_RX_NO	(1 << 4)
> > +#define QMC_BD_RX_AB	(1 << 3)
> > +#define QMC_BD_RX_CR	(1 << 2)
> > +
> > +#define QMC_BD_TX_R	(1 << 15)
> > +#define QMC_BD_TX_W	(1 << 13)
> > +#define QMC_BD_TX_I	(1 << 12)
> > +#define QMC_BD_TX_L	(1 << 11)
> > +#define QMC_BD_TX_TC	(1 << 10)
> > +#define QMC_BD_TX_CM	(1 << 9)
> > +#define QMC_BD_TX_UB	(1 << 7)
> > +#define QMC_BD_TX_PAD	(0x0f << 0)
> > +
> > +/* Numbers of BDs and interrupt items */
> > +#define QMC_NB_TXBDS	8
> > +#define QMC_NB_RXBDS	8
> > +#define QMC_NB_INTS	128
> > +
> > +struct qmc_xfer_desc {
> > +	union {
> > +		void (*tx_complete)(void *context);
> > +		void (*rx_complete)(void *context, size_t length);
> > +	};
> > +	void *context;
> > +};
> > +
> > +struct qmc_chan {
> > +	struct list_head list;
> > +	unsigned int id;
> > +	struct qmc *qmc;
> > +	void *__iomem s_param;
> > +	enum qmc_mode mode;
> > +	u64	tx_ts_mask;
> > +	u64	rx_ts_mask;
> > +	bool is_reverse_data;
> > +
> > +	spinlock_t	tx_lock;
> > +	cbd_t __iomem *txbds;
> > +	cbd_t __iomem *txbd_free;
> > +	cbd_t __iomem *txbd_done;
> > +	struct qmc_xfer_desc tx_desc[QMC_NB_TXBDS];
> > +	u64	nb_tx_underrun;
> > +	bool	is_tx_stopped;
> > +
> > +	spinlock_t	rx_lock;
> > +	cbd_t __iomem *rxbds;
> > +	cbd_t __iomem *rxbd_free;
> > +	cbd_t __iomem *rxbd_done;
> > +	struct qmc_xfer_desc rx_desc[QMC_NB_RXBDS];
> > +	u64	nb_rx_busy;
> > +	int	rx_pending;
> > +	bool	is_rx_halted;
> > +	bool	is_rx_stopped;
> > +};
> > +
> > +struct qmc {
> > +	struct device *dev;
> > +	struct tsa_serial *tsa_serial;
> > +	void *__iomem scc_regs;
> > +	void *__iomem scc_pram;
> > +	void *__iomem dpram;
> > +	u16 scc_pram_offset;
> > +	cbd_t __iomem *bd_table;
> > +	dma_addr_t bd_dma_addr;
> > +	size_t bd_size;
> > +	u16 __iomem *int_table;
> > +	u16 __iomem *int_curr;
> > +	dma_addr_t int_dma_addr;
> > +	size_t int_size;
> > +	struct list_head chan_head;
> > +	struct qmc_chan *chans[64];
> > +};
> > +
> > +static inline void qmc_write16(void *__iomem addr, u16 val) {
> > +	iowrite16be(val, addr);
> > +}
> > +
> > +static inline u16 qmc_read16(void *__iomem addr) {
> > +	return ioread16be(addr);
> > +}
> > +
> > +static inline void qmc_setbits16(void *__iomem addr, u16 set) {
> > +	qmc_write16(addr, qmc_read16(addr) | set); }
> > +
> > +static inline void qmc_clrbits16(void *__iomem addr, u16 clr) {
> > +	qmc_write16(addr, qmc_read16(addr) & ~clr); }
> > +
> > +static inline void qmc_write32(void *__iomem addr, u32 val) {
> > +	iowrite32be(val, addr);
> > +}
> > +
> > +static inline u32 qmc_read32(void *__iomem addr) {
> > +	return ioread32be(addr);
> > +}
> > +
> > +static inline void qmc_setbits32(void *__iomem addr, u32 set) {
> > +	qmc_write32(addr, qmc_read32(addr) | set); }
> > +
> > +
> > +int qmc_chan_get_info(struct qmc_chan *chan, struct qmc_chan_info
> > +*info) {
> > +	struct tsa_serial_info tsa_info;
> > +	int ret;
> > +
> > +	/* Retrieve info from the TSA related serial */
> > +	ret = tsa_serial_get_info(chan->qmc->tsa_serial, &tsa_info);
> > +	if (ret)
> > +		return ret;
> > +
> > +	info->mode = chan->mode;
> > +	info->rx_fs_rate = tsa_info.rx_fs_rate;
> > +	info->rx_bit_rate = tsa_info.rx_bit_rate;
> > +	info->nb_tx_ts = hweight64(chan->tx_ts_mask);
> > +	info->tx_fs_rate = tsa_info.tx_fs_rate;
> > +	info->tx_bit_rate = tsa_info.tx_bit_rate;
> > +	info->nb_rx_ts = hweight64(chan->rx_ts_mask);
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(qmc_chan_get_info);
> > +
> > +int qmc_chan_set_param(struct qmc_chan *chan, const struct
> > +qmc_chan_param *param) {
> > +	if (param->mode != chan->mode)
> > +		return -EINVAL;
> > +
> > +	switch (param->mode) {
> > +	case QMC_HDLC:
> > +		if ((param->hdlc.max_rx_buf_size % 4) ||
> > +		    (param->hdlc.max_rx_buf_size < 8))
> > +			return -EINVAL;
> > +
> > +		qmc_write16(chan->qmc->scc_pram + QMC_GBL_MRBLR,
> > +			    param->hdlc.max_rx_buf_size - 8);
> > +		qmc_write16(chan->s_param + QMC_SPE_MFLR,
> > +			    param->hdlc.max_rx_frame_size);
> > +		if (param->hdlc.is_crc32) {
> > +			qmc_setbits16(chan->s_param + QMC_SPE_CHAMR,
> > +				      QMC_SPE_CHAMR_HDLC_CRC);
> > +		} else {
> > +			qmc_clrbits16(chan->s_param + QMC_SPE_CHAMR,
> > +				      QMC_SPE_CHAMR_HDLC_CRC);
> > +		}
> > +		break;
> > +
> > +	case QMC_TRANSPARENT:
> > +		qmc_write16(chan->s_param + QMC_SPE_TMRBLR,
> > +			    param->transp.max_rx_buf_size);
> > +		break;
> > +
> > +	default:
> > +		return -EINVAL;
> > +	}
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(qmc_chan_set_param);
> > +
> > +int qmc_chan_write_submit(struct qmc_chan *chan, dma_addr_t addr,
> size_t length,
> > +			  void (*complete)(void *context), void *context) {
> > +	struct qmc_xfer_desc *xfer_desc;
> > +	unsigned long flags;
> > +	cbd_t *__iomem bd;
> > +	u16 ctrl;
> > +	int ret;
> > +
> > +	/*
> > +	 * R bit  UB bit
> > +	 *   0       0  : The BD is free
> > +	 *   1       1  : The BD is in used, waiting for transfer
> > +	 *   0       1  : The BD is in used, waiting for completion
> > +	 *   1       0  : Should not append
> > +	 */
> > +
> > +	spin_lock_irqsave(&chan->tx_lock, flags);
> > +	bd = chan->txbd_free;
> > +
> > +	ctrl = qmc_read16(&bd->cbd_sc);
> > +	if (ctrl & (QMC_BD_TX_R | QMC_BD_TX_UB)) {
> > +		/* We are full ... */
> > +		ret = -EBUSY;
> > +		goto end;
> > +	}
> > +
> > +	qmc_write16(&bd->cbd_datlen, length);
> > +	qmc_write32(&bd->cbd_bufaddr, addr);
> > +
> > +	xfer_desc = &chan->tx_desc[bd - chan->txbds];
> > +	xfer_desc->tx_complete = complete;
> > +	xfer_desc->context = context;
> > +
> > +	/* Activate the descriptor */
> > +	ctrl |= (QMC_BD_TX_R | QMC_BD_TX_UB);
> > +	wmb(); /* Be sure to flush the descriptor before control update */
> > +	qmc_write16(&bd->cbd_sc, ctrl);
> > +
> > +	if (!chan->is_tx_stopped)
> > +		qmc_setbits16(chan->s_param + QMC_SPE_CHAMR,
> QMC_SPE_CHAMR_POL);
> > +
> > +	if (ctrl & QMC_BD_TX_W)
> > +		chan->txbd_free = chan->txbds;
> > +	else
> > +		chan->txbd_free++;
> > +
> > +	ret = 0;
> > +
> > +end:
> > +	spin_unlock_irqrestore(&chan->tx_lock, flags);
> > +	return ret;
> > +}
> > +EXPORT_SYMBOL(qmc_chan_write_submit);
> > +
> > +static void qmc_chan_write_done(struct qmc_chan *chan) {
> > +	struct qmc_xfer_desc *xfer_desc;
> > +	void (*complete)(void *context);
> > +	unsigned long flags;
> > +	void *context;
> > +	cbd_t *__iomem bd;
> > +	u16 ctrl;
> > +
> > +	/*
> > +	 * R bit  UB bit
> > +	 *   0       0  : The BD is free
> > +	 *   1       1  : The BD is in used, waiting for transfer
> > +	 *   0       1  : The BD is in used, waiting for completion
> > +	 *   1       0  : Should not append
> > +	 */
> > +
> > +	spin_lock_irqsave(&chan->tx_lock, flags);
> > +	bd = chan->txbd_done;
> > +
> > +	ctrl = qmc_read16(&bd->cbd_sc);
> > +	while (!(ctrl & QMC_BD_TX_R)) {
> > +		if (!(ctrl & QMC_BD_TX_UB))
> > +			goto end;
> > +
> > +		xfer_desc = &chan->tx_desc[bd - chan->txbds];
> > +		complete = xfer_desc->tx_complete;
> > +		context = xfer_desc->context;
> > +		xfer_desc->tx_complete = NULL;
> > +		xfer_desc->context = NULL;
> > +
> > +		qmc_write16(&bd->cbd_sc, ctrl & ~QMC_BD_TX_UB);
> > +
> > +		if (ctrl & QMC_BD_TX_W)
> > +			chan->txbd_done = chan->txbds;
> > +		else
> > +			chan->txbd_done++;
> > +
> > +		if (complete) {
> > +			spin_unlock_irqrestore(&chan->tx_lock, flags);
> > +			complete(context);
> > +			spin_lock_irqsave(&chan->tx_lock, flags);
> > +		}
> > +
> > +		bd = chan->txbd_done;
> > +		ctrl = qmc_read16(&bd->cbd_sc);
> > +	}
> > +
> > +end:
> > +	spin_unlock_irqrestore(&chan->tx_lock, flags); }
> > +
> > +int qmc_chan_read_submit(struct qmc_chan *chan, dma_addr_t addr,
> size_t length,
> > +			 void (*complete)(void *context, size_t length), void
> *context) {
> > +	struct qmc_xfer_desc *xfer_desc;
> > +	unsigned long flags;
> > +	cbd_t *__iomem bd;
> > +	u16 ctrl;
> > +	int ret;
> > +
> > +	/*
> > +	 * E bit  UB bit
> > +	 *   0       0  : The BD is free
> > +	 *   1       1  : The BD is in used, waiting for transfer
> > +	 *   0       1  : The BD is in used, waiting for completion
> > +	 *   1       0  : Should not append
> > +	 */
> > +
> > +	spin_lock_irqsave(&chan->rx_lock, flags);
> > +	bd = chan->rxbd_free;
> > +
> > +	ctrl = qmc_read16(&bd->cbd_sc);
> > +	if (ctrl & (QMC_BD_RX_E | QMC_BD_RX_UB)) {
> > +		/* We are full ... */
> > +		ret = -EBUSY;
> > +		goto end;
> > +	}
> > +
> > +	qmc_write16(&bd->cbd_datlen, 0); /* data length is updated by the
> QMC */
> > +	qmc_write32(&bd->cbd_bufaddr, addr);
> > +
> > +	xfer_desc = &chan->rx_desc[bd - chan->rxbds];
> > +	xfer_desc->rx_complete = complete;
> > +	xfer_desc->context = context;
> > +
> > +	/* Activate the descriptor */
> > +	ctrl |= (QMC_BD_RX_E | QMC_BD_RX_UB);
> > +	wmb(); /* Be sure to flush data before descriptor activation */
> > +	qmc_write16(&bd->cbd_sc, ctrl);
> > +
> > +	/* Restart receiver if needed */
> > +	if (chan->is_rx_halted && !chan->is_rx_stopped) {
> > +		/* Restart receiver */
> > +		if (chan->mode == QMC_TRANSPARENT)
> > +			qmc_write32(chan->s_param + QMC_SPE_ZDSTATE,
> 0x18000080);
> > +		else
> > +			qmc_write32(chan->s_param + QMC_SPE_ZDSTATE,
> 0x00000080);
> > +		qmc_write32(chan->s_param + QMC_SPE_RSTATE,
> 0x31000000);
> > +		chan->is_rx_halted = false;
> > +	}
> > +	chan->rx_pending++;
> > +
> > +	if (ctrl & QMC_BD_RX_W)
> > +		chan->rxbd_free = chan->rxbds;
> > +	else
> > +		chan->rxbd_free++;
> > +
> > +	ret = 0;
> > +end:
> > +	spin_unlock_irqrestore(&chan->rx_lock, flags);
> > +	return ret;
> > +}
> > +EXPORT_SYMBOL(qmc_chan_read_submit);
> > +
> > +static void qmc_chan_read_done(struct qmc_chan *chan) {
> > +	void (*complete)(void *context, size_t size);
> > +	struct qmc_xfer_desc *xfer_desc;
> > +	unsigned long flags;
> > +	cbd_t *__iomem bd;
> > +	void *context;
> > +	u16 datalen;
> > +	u16 ctrl;
> > +
> > +	/*
> > +	 * E bit  UB bit
> > +	 *   0       0  : The BD is free
> > +	 *   1       1  : The BD is in used, waiting for transfer
> > +	 *   0       1  : The BD is in used, waiting for completion
> > +	 *   1       0  : Should not append
> > +	 */
> > +
> > +	spin_lock_irqsave(&chan->rx_lock, flags);
> > +	bd = chan->rxbd_done;
> > +
> > +	ctrl = qmc_read16(&bd->cbd_sc);
> > +	while (!(ctrl & QMC_BD_RX_E)) {
> > +		if (!(ctrl & QMC_BD_RX_UB))
> > +			goto end;
> > +
> > +		xfer_desc = &chan->rx_desc[bd - chan->rxbds];
> > +		complete = xfer_desc->rx_complete;
> > +		context = xfer_desc->context;
> > +		xfer_desc->rx_complete = NULL;
> > +		xfer_desc->context = NULL;
> > +
> > +		datalen = qmc_read16(&bd->cbd_datlen);
> > +		qmc_write16(&bd->cbd_sc, ctrl & ~QMC_BD_RX_UB);
> > +
> > +		if (ctrl & QMC_BD_RX_W)
> > +			chan->rxbd_done = chan->rxbds;
> > +		else
> > +			chan->rxbd_done++;
> > +
> > +		chan->rx_pending--;
> > +
> > +		if (complete) {
> > +			spin_unlock_irqrestore(&chan->rx_lock, flags);
> > +			complete(context, datalen);
> > +			spin_lock_irqsave(&chan->rx_lock, flags);
> > +		}
> > +
> > +		bd = chan->rxbd_done;
> > +		ctrl = qmc_read16(&bd->cbd_sc);
> > +	}
> > +
> > +end:
> > +	spin_unlock_irqrestore(&chan->rx_lock, flags); }
> > +
> > +static int qmc_chan_command(struct qmc_chan *chan, u8 qmc_opcode) {
> > +	return cpm_command(chan->id << 2, (qmc_opcode << 4) | 0x0E); }
> > +
> > +static int qmc_chan_stop_rx(struct qmc_chan *chan) {
> > +	unsigned long flags;
> > +	int ret;
> > +
> > +	spin_lock_irqsave(&chan->rx_lock, flags);
> > +
> > +	/* Send STOP RECEIVE command */
> > +	ret = qmc_chan_command(chan, 0x0);
> > +	if (ret) {
> > +		dev_err(chan->qmc->dev, "chan %u: Send STOP RECEIVE
> failed (%d)\n",
> > +			chan->id, ret);
> > +		goto end;
> > +	}
> > +
> > +	chan->is_rx_stopped = true;
> > +
> > +end:
> > +	spin_unlock_irqrestore(&chan->rx_lock, flags);
> > +	return ret;
> > +}
> > +
> > +static int qmc_chan_stop_tx(struct qmc_chan *chan) {
> > +	unsigned long flags;
> > +	int ret;
> > +
> > +	spin_lock_irqsave(&chan->tx_lock, flags);
> > +
> > +	/* Send STOP TRANSMIT command */
> > +	ret = qmc_chan_command(chan, 0x1);
> > +	if (ret) {
> > +		dev_err(chan->qmc->dev, "chan %u: Send STOP TRANSMIT
> failed (%d)\n",
> > +			chan->id, ret);
> > +		goto end;
> > +	}
> > +
> > +	chan->is_tx_stopped = true;
> > +
> > +end:
> > +	spin_unlock_irqrestore(&chan->tx_lock, flags);
> > +	return ret;
> > +}
> > +
> > +int qmc_chan_stop(struct qmc_chan *chan, int direction) {
> > +	int ret;
> > +
> > +	if (direction & QMC_CHAN_READ) {
> > +		ret = qmc_chan_stop_rx(chan);
> > +		if (ret)
> > +			return ret;
> > +	}
> > +
> > +	if (direction & QMC_CHAN_WRITE) {
> > +		ret = qmc_chan_stop_tx(chan);
> > +		if (ret)
> > +			return ret;
> > +	}
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(qmc_chan_stop);
> > +
> > +static void qmc_chan_start_rx(struct qmc_chan *chan) {
> > +	unsigned long flags;
> > +
> > +	spin_lock_irqsave(&chan->rx_lock, flags);
> > +
> > +	/* Restart the receiver */
> > +	if (chan->mode == QMC_TRANSPARENT)
> > +		qmc_write32(chan->s_param + QMC_SPE_ZDSTATE,
> 0x18000080);
> > +	else
> > +		qmc_write32(chan->s_param + QMC_SPE_ZDSTATE,
> 0x00000080);
> > +	qmc_write32(chan->s_param + QMC_SPE_RSTATE, 0x31000000);
> > +	chan->is_rx_halted = false;
> > +
> > +	chan->is_rx_stopped = false;
> > +
> > +	spin_unlock_irqrestore(&chan->rx_lock, flags); }
> > +
> > +static void qmc_chan_start_tx(struct qmc_chan *chan) {
> > +	unsigned long flags;
> > +
> > +	spin_lock_irqsave(&chan->tx_lock, flags);
> > +
> > +	/*
> > +	 * Enable channel transmitter as it could be disabled if
> > +	 * qmc_chan_reset() was called.
> > +	 */
> > +	qmc_setbits16(chan->s_param + QMC_SPE_CHAMR,
> QMC_SPE_CHAMR_ENT);
> > +
> > +	/* Set the POL bit in the channel mode register */
> > +	qmc_setbits16(chan->s_param + QMC_SPE_CHAMR,
> QMC_SPE_CHAMR_POL);
> > +
> > +	chan->is_tx_stopped = false;
> > +
> > +	spin_unlock_irqrestore(&chan->tx_lock, flags); }
> > +
> > +int qmc_chan_start(struct qmc_chan *chan, int direction) {
> > +	if (direction & QMC_CHAN_READ)
> > +		qmc_chan_start_rx(chan);
> > +
> > +	if (direction & QMC_CHAN_WRITE)
> > +		qmc_chan_start_tx(chan);
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(qmc_chan_start);
> > +
> > +static void qmc_chan_reset_rx(struct qmc_chan *chan) {
> > +	struct qmc_xfer_desc *xfer_desc;
> > +	unsigned long flags;
> > +	cbd_t *__iomem bd;
> > +	u16 ctrl;
> > +
> > +	spin_lock_irqsave(&chan->rx_lock, flags);
> > +	bd = chan->rxbds;
> > +	do {
> > +		ctrl = qmc_read16(&bd->cbd_sc);
> > +		qmc_write16(&bd->cbd_sc, ctrl & ~(QMC_BD_RX_UB |
> QMC_BD_RX_E));
> > +
> > +		xfer_desc = &chan->rx_desc[bd - chan->rxbds];
> > +		xfer_desc->rx_complete = NULL;
> > +		xfer_desc->context = NULL;
> > +
> > +		bd++;
> > +	} while (!(ctrl & QMC_BD_RX_W));
> > +
> > +	chan->rxbd_free = chan->rxbds;
> > +	chan->rxbd_done = chan->rxbds;
> > +	qmc_write16(chan->s_param + QMC_SPE_RBPTR,
> > +		    qmc_read16(chan->s_param + QMC_SPE_RBASE));
> > +
> > +	chan->rx_pending = 0;
> > +	chan->is_rx_stopped = false;
> > +
> > +	spin_unlock_irqrestore(&chan->rx_lock, flags); }
> > +
> > +static void qmc_chan_reset_tx(struct qmc_chan *chan) {
> > +	struct qmc_xfer_desc *xfer_desc;
> > +	unsigned long flags;
> > +	cbd_t *__iomem bd;
> > +	u16 ctrl;
> > +
> > +	spin_lock_irqsave(&chan->tx_lock, flags);
> > +
> > +	/* Disable transmitter. It will be re-enable on qmc_chan_start() */
> > +	qmc_clrbits16(chan->s_param + QMC_SPE_CHAMR,
> QMC_SPE_CHAMR_ENT);
> > +
> > +	bd = chan->txbds;
> > +	do {
> > +		ctrl = qmc_read16(&bd->cbd_sc);
> > +		qmc_write16(&bd->cbd_sc, ctrl & ~(QMC_BD_TX_UB |
> QMC_BD_TX_R));
> > +
> > +		xfer_desc = &chan->tx_desc[bd - chan->txbds];
> > +		xfer_desc->tx_complete = NULL;
> > +		xfer_desc->context = NULL;
> > +
> > +		bd++;
> > +	} while (!(ctrl & QMC_BD_TX_W));
> > +
> > +	chan->txbd_free = chan->txbds;
> > +	chan->txbd_done = chan->txbds;
> > +	qmc_write16(chan->s_param + QMC_SPE_TBPTR,
> > +		    qmc_read16(chan->s_param + QMC_SPE_TBASE));
> > +
> > +	/* Reset TSTATE and ZISTATE to their initial value */
> > +	qmc_write32(chan->s_param + QMC_SPE_TSTATE, 0x30000000);
> > +	qmc_write32(chan->s_param + QMC_SPE_ZISTATE, 0x00000100);
> > +
> > +	spin_unlock_irqrestore(&chan->tx_lock, flags); }
> > +
> > +int qmc_chan_reset(struct qmc_chan *chan, int direction) {
> > +	if (direction & QMC_CHAN_READ)
> > +		qmc_chan_reset_rx(chan);
> > +
> > +	if (direction & QMC_CHAN_WRITE)
> > +		qmc_chan_reset_tx(chan);
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(qmc_chan_reset);
> > +
> > +static int qmc_check_chans(struct qmc *qmc) {
> > +	struct tsa_serial_info info;
> > +	bool is_one_table = false;
> > +	struct qmc_chan *chan;
> > +	u64 tx_ts_mask = 0;
> > +	u64 rx_ts_mask = 0;
> > +	u64 tx_ts_assigned_mask;
> > +	u64 rx_ts_assigned_mask;
> > +	int ret;
> > +
> > +	/* Retrieve info from the TSA related serial */
> > +	ret = tsa_serial_get_info(qmc->tsa_serial, &info);
> > +	if (ret)
> > +		return ret;
> > +
> > +	/*
> > +	 * If more than 32 TS are assigned to this serial, one common table is
> > +	 * used for Tx and Rx and so masks must be equal for all channels.
> > +	 */
> > +	if ((info.nb_tx_ts > 32) || (info.nb_rx_ts > 32)) {
> > +		if (info.nb_tx_ts != info.nb_rx_ts) {
> > +			dev_err(qmc->dev, "Number of TSA Tx/Rx TS
> assigned are not equal\n");
> > +			return -EINVAL;
> > +		}
> > +		is_one_table = true;
> > +	}
> > +
> > +
> > +	tx_ts_assigned_mask = (((u64)1) << info.nb_tx_ts) - 1;
> > +	rx_ts_assigned_mask = (((u64)1) << info.nb_rx_ts) - 1;
> > +
> > +	list_for_each_entry(chan, &qmc->chan_head, list) {
> > +		if (chan->tx_ts_mask > tx_ts_assigned_mask) {
> > +			dev_err(qmc->dev, "chan %u uses TSA unassigned Tx
> TS\n", chan->id);
> > +			return -EINVAL;
> > +		}
> > +		if (tx_ts_mask & chan->tx_ts_mask) {
> > +			dev_err(qmc->dev, "chan %u uses an already used
> Tx TS\n", chan->id);
> > +			return -EINVAL;
> > +		}
> > +
> > +		if (chan->rx_ts_mask > rx_ts_assigned_mask) {
> > +			dev_err(qmc->dev, "chan %u uses TSA unassigned
> Rx TS\n", chan->id);
> > +			return -EINVAL;
> > +		}
> > +		if (rx_ts_mask & chan->rx_ts_mask) {
> > +			dev_err(qmc->dev, "chan %u uses an already used
> Rx TS\n", chan->id);
> > +			return -EINVAL;
> > +		}
> > +
> > +		if (is_one_table && (chan->tx_ts_mask != chan-
> >rx_ts_mask)) {
> > +			dev_err(qmc->dev, "chan %u uses different Rx and
> Tx TS\n", chan->id);
> > +			return -EINVAL;
> > +		}
> > +
> > +		tx_ts_mask |= chan->tx_ts_mask;
> > +		rx_ts_mask |= chan->rx_ts_mask;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static unsigned int qmc_nb_chans(struct qmc *qmc) {
> > +	unsigned int count = 0;
> > +	struct qmc_chan *chan;
> > +
> > +	list_for_each_entry(chan, &qmc->chan_head, list)
> > +		count++;
> > +
> > +	return count;
> > +}
> > +
> > +static int qmc_of_parse_chans(struct qmc *qmc, struct device_node
> > +*np) {
> > +	struct device_node *chan_np;
> > +	struct qmc_chan *chan;
> > +	const char *mode;
> > +	u32 chan_id;
> > +	u64 ts_mask;
> > +	int ret;
> > +
> > +	for_each_available_child_of_node(np, chan_np) {
> > +		ret = of_property_read_u32(chan_np, "reg", &chan_id);
> > +		if (ret) {
> > +			dev_err(qmc->dev, "%pOF: failed to read reg\n",
> chan_np);
> > +			of_node_put(chan_np);
> > +			return ret;
> > +		}
> > +		if (chan_id > 63) {
> > +			dev_err(qmc->dev, "%pOF: Invalid chan_id\n",
> chan_np);
> > +			of_node_put(chan_np);
> > +			return -EINVAL;
> > +		}
> > +
> > +		chan = devm_kzalloc(qmc->dev, sizeof(*chan), GFP_KERNEL);
> > +		if (!chan) {
> > +			of_node_put(chan_np);
> > +			return -ENOMEM;
> > +		}
> > +
> > +		chan->id = chan_id;
> > +		spin_lock_init(&chan->rx_lock);
> > +		spin_lock_init(&chan->tx_lock);
> > +
> > +		ret = of_property_read_u64(chan_np, "fsl,tx-ts-mask",
> &ts_mask);
> > +		if (ret) {
> > +			dev_err(qmc->dev, "%pOF: failed to read fsl,tx-ts-
> mask\n",
> > +				chan_np);
> > +			of_node_put(chan_np);
> > +			return ret;
> > +		}
> > +		chan->tx_ts_mask = ts_mask;
> > +
> > +		ret = of_property_read_u64(chan_np, "fsl,rx-ts-mask",
> &ts_mask);
> > +		if (ret) {
> > +			dev_err(qmc->dev, "%pOF: failed to read fsl,rx-ts-
> mask\n",
> > +				chan_np);
> > +			of_node_put(chan_np);
> > +			return ret;
> > +		}
> > +		chan->rx_ts_mask = ts_mask;
> > +
> > +		mode = "transparent";
> > +		ret = of_property_read_string(chan_np, "fsl,operational-
> mode", &mode);
> > +		if (ret && ret != -EINVAL) {
> > +			dev_err(qmc->dev, "%pOF: failed to read
> fsl,operational-mode\n",
> > +				chan_np);
> > +			of_node_put(chan_np);
> > +			return ret;
> > +		}
> > +		if (!strcmp(mode, "transparent")) {
> > +			chan->mode = QMC_TRANSPARENT;
> > +		} else if (!strcmp(mode, "hdlc")) {
> > +			chan->mode = QMC_HDLC;
> > +		} else {
> > +			dev_err(qmc->dev, "%pOF: Invalid fsl,operational-
> mode (%s)\n",
> > +				chan_np, mode);
> > +			of_node_put(chan_np);
> > +			return -EINVAL;
> > +		}
> > +
> > +		chan->is_reverse_data = of_property_read_bool(chan_np,
> > +							      "fsl,reverse-data");
> > +
> > +		list_add_tail(&chan->list, &qmc->chan_head);
> > +		qmc->chans[chan->id] = chan;
> > +	}
> > +
> > +	return qmc_check_chans(qmc);
> > +}
> > +
> > +static int qmc_setup_tsa_64rxtx(struct qmc *qmc, const struct
> > +tsa_serial_info *info) {
> > +	struct qmc_chan *chan;
> > +	unsigned int i;
> > +	u16 val;
> > +
> > +	/*
> > +	 * Use a common Tx/Rx 64 entries table.
> > +	 * Everything was previously checked, Tx and Rx related stuffs are
> > +	 * identical -> Used Rx related stuff to build the table
> > +	 */
> > +
> > +	/* Invalidate all entries */
> > +	for (i = 0; i < 64; i++)
> > +		qmc_write16(qmc->scc_pram + QMC_GBL_TSATRX + (i * 2),
> 0x0000);
> > +
> > +	/* Set entries based on Rx stuff*/
> > +	list_for_each_entry(chan, &qmc->chan_head, list) {
> > +		for (i = 0; i < info->nb_rx_ts; i++) {
> > +			if (!(chan->rx_ts_mask & (((u64)1) << i)))
> > +				continue;
> > +
> > +			val = QMC_TSA_VALID | QMC_TSA_MASK |
> > +			      QMC_TSA_CHANNEL(chan->id);
> > +			qmc_write16(qmc->scc_pram + QMC_GBL_TSATRX +
> (i * 2), val);
> > +		}
> > +	}
> > +
> > +	/* Set Wrap bit on last entry */
> > +	qmc_setbits16(qmc->scc_pram + QMC_GBL_TSATRX + ((info-
> >nb_rx_ts - 1) * 2),
> > +		      QMC_TSA_WRAP);
> > +
> > +	/* Init pointers to the table */
> > +	val = qmc->scc_pram_offset + QMC_GBL_TSATRX;
> > +	qmc_write16(qmc->scc_pram + QMC_GBL_RX_S_PTR, val);
> > +	qmc_write16(qmc->scc_pram + QMC_GBL_RXPTR, val);
> > +	qmc_write16(qmc->scc_pram + QMC_GBL_TX_S_PTR, val);
> > +	qmc_write16(qmc->scc_pram + QMC_GBL_TXPTR, val);
> > +
> > +	return 0;
> > +}
> > +
> > +static int qmc_setup_tsa_32rx_32tx(struct qmc *qmc, const struct
> > +tsa_serial_info *info) {
> > +	struct qmc_chan *chan;
> > +	unsigned int i;
> > +	u16 val;
> > +
> > +	/*
> > +	 * Use a Tx 32 entries table and a Rx 32 entries table.
> > +	 * Everything was previously checked.
> > +	 */
> > +
> > +	/* Invalidate all entries */
> > +	for (i = 0; i < 32; i++) {
> > +		qmc_write16(qmc->scc_pram + QMC_GBL_TSATRX + (i * 2),
> 0x0000);
> > +		qmc_write16(qmc->scc_pram + QMC_GBL_TSATTX + (i * 2),
> 0x0000);
> > +	}
> > +
> > +	/* Set entries based on Rx and Tx stuff*/
> > +	list_for_each_entry(chan, &qmc->chan_head, list) {
> > +		/* Rx part */
> > +		for (i = 0; i < info->nb_rx_ts; i++) {
> > +			if (!(chan->rx_ts_mask & (((u64)1) << i)))
> > +				continue;
> > +
> > +			val = QMC_TSA_VALID | QMC_TSA_MASK |
> > +			      QMC_TSA_CHANNEL(chan->id);
> > +			qmc_write16(qmc->scc_pram + QMC_GBL_TSATRX +
> (i * 2), val);
> > +		}
> > +		/* Tx part */
> > +		for (i = 0; i < info->nb_tx_ts; i++) {
> > +			if (!(chan->tx_ts_mask & (((u64)1) << i)))
> > +				continue;
> > +
> > +			val = QMC_TSA_VALID | QMC_TSA_MASK |
> > +			      QMC_TSA_CHANNEL(chan->id);
> > +			qmc_write16(qmc->scc_pram + QMC_GBL_TSATTX +
> (i * 2), val);
> > +		}
> > +	}
> > +
> > +	/* Set Wrap bit on last entries */
> > +	qmc_setbits16(qmc->scc_pram + QMC_GBL_TSATRX + ((info-
> >nb_rx_ts - 1) * 2),
> > +		      QMC_TSA_WRAP);
> > +	qmc_setbits16(qmc->scc_pram + QMC_GBL_TSATTX + ((info-
> >nb_tx_ts - 1) * 2),
> > +		      QMC_TSA_WRAP);
> > +
> > +	/* Init Rx pointers ...*/
> > +	val = qmc->scc_pram_offset + QMC_GBL_TSATRX;
> > +	qmc_write16(qmc->scc_pram + QMC_GBL_RX_S_PTR, val);
> > +	qmc_write16(qmc->scc_pram + QMC_GBL_RXPTR, val);
> > +
> > +	/* ... and Tx pointers */
> > +	val = qmc->scc_pram_offset + QMC_GBL_TSATTX;
> > +	qmc_write16(qmc->scc_pram + QMC_GBL_TX_S_PTR, val);
> > +	qmc_write16(qmc->scc_pram + QMC_GBL_TXPTR, val);
> > +
> > +	return 0;
> > +}
> > +
> > +static int qmc_setup_tsa(struct qmc *qmc) {
> > +	struct tsa_serial_info info;
> > +	int ret;
> > +
> > +	/* Retrieve info from the TSA related serial */
> > +	ret = tsa_serial_get_info(qmc->tsa_serial, &info);
> > +	if (ret)
> > +		return ret;
> > +
> > +	/*
> > +	 * Setup one common 64 entries table or two 32 entries (one for Tx
> and
> > +	 * one for Tx) according to assigned TS numbers.
> > +	 */
> > +	return ((info.nb_tx_ts > 32) || (info.nb_rx_ts > 32)) ?
> > +		qmc_setup_tsa_64rxtx(qmc, &info) :
> > +		qmc_setup_tsa_32rx_32tx(qmc, &info); }
> > +
> > +static int qmc_setup_chan_trnsync(struct qmc *qmc, struct qmc_chan
> > +*chan) {
> > +	struct tsa_serial_info info;
> > +	u16 first_rx, last_tx;
> > +	u16 trnsync;
> > +	int ret;
> > +
> > +	/* Retrieve info from the TSA related serial */
> > +	ret = tsa_serial_get_info(chan->qmc->tsa_serial, &info);
> > +	if (ret)
> > +		return ret;
> > +
> > +	/* Find the first Rx TS allocated to the channel */
> > +	first_rx = chan->rx_ts_mask ? __ffs64(chan->rx_ts_mask) + 1 : 0;
> > +
> > +	/* Find the last Tx TS allocated to the channel */
> > +	last_tx = fls64(chan->tx_ts_mask);
> > +
> > +	trnsync = 0;
> > +	if (info.nb_rx_ts)
> > +		trnsync |= QMC_SPE_TRNSYNC_RX((first_rx % info.nb_rx_ts)
> * 2);
> > +	if (info.nb_tx_ts)
> > +		trnsync |= QMC_SPE_TRNSYNC_TX((last_tx % info.nb_tx_ts)
> * 2);
> > +
> > +	qmc_write16(chan->s_param + QMC_SPE_TRNSYNC, trnsync);
> > +
> > +	dev_dbg(qmc->dev, "chan %u: trnsync=0x%04x, rx %u/%u 0x%llx,
> tx %u/%u 0x%llx\n",
> > +		chan->id, trnsync,
> > +		first_rx, info.nb_rx_ts, chan->rx_ts_mask,
> > +		last_tx, info.nb_tx_ts, chan->tx_ts_mask);
> > +
> > +	return 0;
> > +}
> > +
> > +static int qmc_setup_chan(struct qmc *qmc, struct qmc_chan *chan) {
> > +	unsigned int i;
> > +	cbd_t __iomem *bd;
> > +	int ret;
> > +	u16 val;
> > +
> > +	chan->qmc = qmc;
> > +
> > +	/* Set channel specific parameter base address */
> > +	chan->s_param = qmc->dpram + (chan->id * 64);
> > +	/* 16 bd per channel (8 rx and 8 tx) */
> > +	chan->txbds = qmc->bd_table + (chan->id * (QMC_NB_TXBDS +
> QMC_NB_RXBDS));
> > +	chan->rxbds = qmc->bd_table + (chan->id * (QMC_NB_TXBDS +
> > +QMC_NB_RXBDS)) + QMC_NB_TXBDS;
> > +
> > +	chan->txbd_free = chan->txbds;
> > +	chan->txbd_done = chan->txbds;
> > +	chan->rxbd_free = chan->rxbds;
> > +	chan->rxbd_done = chan->rxbds;
> > +
> > +	/* TBASE and TBPTR*/
> > +	val = chan->id * (QMC_NB_TXBDS + QMC_NB_RXBDS) *
> sizeof(cbd_t);
> > +	qmc_write16(chan->s_param + QMC_SPE_TBASE, val);
> > +	qmc_write16(chan->s_param + QMC_SPE_TBPTR, val);
> > +
> > +	/* RBASE and RBPTR*/
> > +	val = ((chan->id * (QMC_NB_TXBDS + QMC_NB_RXBDS)) +
> QMC_NB_TXBDS) * sizeof(cbd_t);
> > +	qmc_write16(chan->s_param + QMC_SPE_RBASE, val);
> > +	qmc_write16(chan->s_param + QMC_SPE_RBPTR, val);
> > +	qmc_write32(chan->s_param + QMC_SPE_TSTATE, 0x30000000);
> > +	qmc_write32(chan->s_param + QMC_SPE_RSTATE, 0x31000000);
> > +	qmc_write32(chan->s_param + QMC_SPE_ZISTATE, 0x00000100);
> > +	if (chan->mode == QMC_TRANSPARENT) {
> > +		qmc_write32(chan->s_param + QMC_SPE_ZDSTATE,
> 0x18000080);
> > +		qmc_write16(chan->s_param + QMC_SPE_TMRBLR, 60);
> > +		val = QMC_SPE_CHAMR_MODE_TRANSP |
> QMC_SPE_CHAMR_TRANSP_SYNC;
> > +		if (chan->is_reverse_data)
> > +			val |= QMC_SPE_CHAMR_TRANSP_RD;
> > +		qmc_write16(chan->s_param + QMC_SPE_CHAMR, val);
> > +		ret = qmc_setup_chan_trnsync(qmc, chan);
> > +		if (ret)
> > +			return ret;
> > +	} else {
> > +		qmc_write32(chan->s_param + QMC_SPE_ZDSTATE,
> 0x00000080);
> > +		qmc_write16(chan->s_param + QMC_SPE_MFLR, 60);
> > +		qmc_write16(chan->s_param + QMC_SPE_CHAMR,
> > +			QMC_SPE_CHAMR_MODE_HDLC |
> QMC_SPE_CHAMR_HDLC_IDLM);
> > +	}
> > +
> > +	/* Do not enable interrupts now. They will be enabled later */
> > +	qmc_write16(chan->s_param + QMC_SPE_INTMSK, 0x0000);
> > +
> > +	/* Init Rx BDs and set Wrap bit on last descriptor */
> > +	BUILD_BUG_ON(QMC_NB_RXBDS == 0);
> > +	val = QMC_BD_RX_I;
> > +	for (i = 0; i < QMC_NB_RXBDS; i++) {
> > +		bd = chan->rxbds + i;
> > +		qmc_write16(&bd->cbd_sc, val);
> > +	}
> > +	bd = chan->rxbds + QMC_NB_RXBDS - 1;
> > +	qmc_write16(&bd->cbd_sc, val | QMC_BD_RX_W);
> > +
> > +	/* Init Tx BDs and set Wrap bit on last descriptor */
> > +	BUILD_BUG_ON(QMC_NB_TXBDS == 0);
> > +	val = QMC_BD_TX_I;
> > +	if (chan->mode == QMC_HDLC)
> > +		val |= QMC_BD_TX_L | QMC_BD_TX_TC;
> > +	for (i = 0; i < QMC_NB_TXBDS; i++) {
> > +		bd = chan->txbds + i;
> > +		qmc_write16(&bd->cbd_sc, val);
> > +	}
> > +	bd = chan->txbds + QMC_NB_TXBDS - 1;
> > +	qmc_write16(&bd->cbd_sc, val | QMC_BD_TX_W);
> > +
> > +	return 0;
> > +}
> > +
> > +static int qmc_setup_chans(struct qmc *qmc) {
> > +	struct qmc_chan *chan;
> > +	int ret;
> > +
> > +	list_for_each_entry(chan, &qmc->chan_head, list) {
> > +		ret = qmc_setup_chan(qmc, chan);
> > +		if (ret)
> > +			return ret;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static int qmc_finalize_chans(struct qmc *qmc) {
> > +	struct qmc_chan *chan;
> > +	int ret;
> > +
> > +	list_for_each_entry(chan, &qmc->chan_head, list) {
> > +		/* Unmask channel interrupts */
> > +		if (chan->mode == QMC_HDLC) {
> > +			qmc_write16(chan->s_param + QMC_SPE_INTMSK,
> > +				    QMC_INT_NID | QMC_INT_IDL |
> QMC_INT_MRF |
> > +				    QMC_INT_UN | QMC_INT_RXF |
> QMC_INT_BSY |
> > +				    QMC_INT_TXB | QMC_INT_RXB);
> > +		} else {
> > +			qmc_write16(chan->s_param + QMC_SPE_INTMSK,
> > +				    QMC_INT_UN | QMC_INT_BSY |
> > +				    QMC_INT_TXB | QMC_INT_RXB);
> > +		}
> > +
> > +		/* Forced stop the channel */
> > +		ret = qmc_chan_stop(chan, QMC_CHAN_ALL);
> > +		if (ret)
> > +			return ret;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static int qmc_setup_ints(struct qmc *qmc) {
> > +	unsigned int i;
> > +	u16 __iomem *last;
> > +
> > +	/* Raz all entries */
> > +	for (i = 0; i < (qmc->int_size / sizeof(u16)); i++)
> > +		qmc_write16(qmc->int_table + i, 0x0000);
> > +
> > +	/* Set Wrap bit on last entry */
> > +	if (qmc->int_size >= sizeof(u16)) {
> > +		last = qmc->int_table + (qmc->int_size / sizeof(u16)) - 1;
> > +		qmc_write16(last, QMC_INT_W);
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static void qmc_irq_gint(struct qmc *qmc) {
> > +	struct qmc_chan *chan;
> > +	unsigned int chan_id;
> > +	unsigned long flags;
> > +	u16 int_entry;
> > +
> > +	int_entry = qmc_read16(qmc->int_curr);
> > +	while (int_entry & QMC_INT_V) {
> > +		/* Clear all but the Wrap bit */
> > +		qmc_write16(qmc->int_curr, int_entry & QMC_INT_W);
> > +
> > +		chan_id = QMC_INT_GET_CHANNEL(int_entry);
> > +		chan = qmc->chans[chan_id];
> > +		if (!chan) {
> > +			dev_err(qmc->dev, "interrupt on invalid chan %u\n",
> chan_id);
> > +			goto int_next;
> > +		}
> > +
> > +		if (int_entry & QMC_INT_TXB)
> > +			qmc_chan_write_done(chan);
> > +
> > +		if (int_entry & QMC_INT_UN) {
> > +			dev_info(qmc->dev, "intr chan %u, 0x%04x (UN)\n",
> chan_id,
> > +				 int_entry);
> > +			chan->nb_tx_underrun++;
> > +		}
> > +
> > +		if (int_entry & QMC_INT_BSY) {
> > +			dev_info(qmc->dev, "intr chan %u, 0x%04x (BSY)\n",
> chan_id,
> > +				 int_entry);
> > +			chan->nb_rx_busy++;
> > +			/* Restart the receiver if needed */
> > +			spin_lock_irqsave(&chan->rx_lock, flags);
> > +			if (chan->rx_pending && !chan->is_rx_stopped) {
> > +				if (chan->mode == QMC_TRANSPARENT)
> > +					qmc_write32(chan->s_param +
> QMC_SPE_ZDSTATE, 0x18000080);
> > +				else
> > +					qmc_write32(chan->s_param +
> QMC_SPE_ZDSTATE, 0x00000080);
> > +				qmc_write32(chan->s_param +
> QMC_SPE_RSTATE, 0x31000000);
> > +				chan->is_rx_halted = false;
> > +			} else {
> > +				chan->is_rx_halted = true;
> > +			}
> > +			spin_unlock_irqrestore(&chan->rx_lock, flags);
> > +		}
> > +
> > +		if (int_entry & QMC_INT_RXB)
> > +			qmc_chan_read_done(chan);
> > +
> > +int_next:
> > +		if (int_entry & QMC_INT_W)
> > +			qmc->int_curr = qmc->int_table;
> > +		else
> > +			qmc->int_curr++;
> > +		int_entry = qmc_read16(qmc->int_curr);
> > +	}
> > +}
> > +
> > +static irqreturn_t qmc_irq_handler(int irq, void *priv) {
> > +	struct qmc *qmc = (struct qmc *)priv;
> > +	u16 scce;
> > +
> > +	scce = qmc_read16(qmc->scc_regs + SCC_SCCE);
> > +	qmc_write16(qmc->scc_regs + SCC_SCCE, scce);
> > +
> > +	if (unlikely(scce & SCC_SCCE_IQOV))
> > +		dev_info(qmc->dev, "IRQ queue overflow\n");
> > +
> > +	if (unlikely(scce & SCC_SCCE_GUN))
> > +		dev_err(qmc->dev, "Global transmitter underrun\n");
> > +
> > +	if (unlikely(scce & SCC_SCCE_GOV))
> > +		dev_err(qmc->dev, "Global receiver overrun\n");
> > +
> > +	/* normal interrupt */
> > +	if (likely(scce & SCC_SCCE_GINT))
> > +		qmc_irq_gint(qmc);
> > +
> > +	return IRQ_HANDLED;
> > +}
> > +
> > +static int qmc_probe(struct platform_device *pdev) {
> > +	struct device_node *np = pdev->dev.of_node;
> > +	unsigned int nb_chans;
> > +	struct resource *res;
> > +	struct qmc *qmc;
> > +	int irq;
> > +	int ret;
> > +
> > +	qmc = devm_kzalloc(&pdev->dev, sizeof(*qmc), GFP_KERNEL);
> > +	if (!qmc)
> > +		return -ENOMEM;
> > +
> > +	qmc->dev = &pdev->dev;
> > +	INIT_LIST_HEAD(&qmc->chan_head);
> > +
> > +	qmc->scc_regs = devm_platform_ioremap_resource_byname(pdev,
> "scc_regs");
> > +	if (IS_ERR(qmc->scc_regs))
> > +		return PTR_ERR(qmc->scc_regs);
> > +
> > +
> > +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
> "scc_pram");
> > +	if (!res)
> > +		return -EINVAL;
> > +	qmc->scc_pram_offset = res->start - get_immrbase();
> > +	qmc->scc_pram = devm_ioremap_resource(qmc->dev, res);
> > +	if (IS_ERR(qmc->scc_pram))
> > +		return PTR_ERR(qmc->scc_pram);
> > +
> > +	qmc->dpram  = devm_platform_ioremap_resource_byname(pdev,
> "dpram");
> > +	if (IS_ERR(qmc->dpram))
> > +		return PTR_ERR(qmc->dpram);
> > +
> > +	qmc->tsa_serial = devm_tsa_serial_get_byphandle(qmc->dev, np,
> "fsl,tsa-serial");
> > +	if (IS_ERR(qmc->tsa_serial)) {
> > +		return dev_err_probe(qmc->dev, PTR_ERR(qmc->tsa_serial),
> > +				     "Failed to get TSA serial\n");
> > +	}
> > +
> > +	/* Connect the serial (SCC) to TSA */
> > +	ret = tsa_serial_connect(qmc->tsa_serial);
> > +	if (ret) {
> > +		dev_err(qmc->dev, "Failed to connect TSA serial\n");
> > +		return ret;
> > +	}
> > +
> > +	/* Parse channels informationss */
> > +	ret = qmc_of_parse_chans(qmc, np);
> > +	if (ret)
> > +		goto err_tsa_serial_disconnect;
> > +
> > +	nb_chans = qmc_nb_chans(qmc);
> > +
> > +	/* Init GMSR_H and GMSR_L registers */
> > +	qmc_write32(qmc->scc_regs + SCC_GSMRH,
> > +		    SCC_GSMRH_CDS | SCC_GSMRH_CTSS | SCC_GSMRH_CDP
> |
> > +SCC_GSMRH_CTSP);
> > +
> > +	/* enable QMC mode */
> > +	qmc_write32(qmc->scc_regs + SCC_GSMRL,
> SCC_GSMRL_MODE_QMC);
> > +
> > +	/*
> > +	 * Allocate the buffer descriptor table
> > +	 * 8 rx and 8 tx descriptors per channel
> > +	 */
> > +	qmc->bd_size = (nb_chans * (QMC_NB_TXBDS + QMC_NB_RXBDS))
> * sizeof(cbd_t);
> > +	qmc->bd_table = dmam_alloc_coherent(qmc->dev, qmc->bd_size,
> > +		&qmc->bd_dma_addr, GFP_KERNEL);
> > +	if (!qmc->bd_table) {
> > +		dev_err(qmc->dev, "Failed to allocate bd table\n");
> > +		ret = -ENOMEM;
> > +		goto err_tsa_serial_disconnect;
> > +	}
> > +	memset(qmc->bd_table, 0, qmc->bd_size);
> > +
> > +	qmc_write32(qmc->scc_pram + QMC_GBL_MCBASE, qmc-
> >bd_dma_addr);
> > +
> > +	/* Allocate the interrupt table */
> > +	qmc->int_size = QMC_NB_INTS * sizeof(u16);
> > +	qmc->int_table = dmam_alloc_coherent(qmc->dev, qmc->int_size,
> > +		&qmc->int_dma_addr, GFP_KERNEL);
> > +	if (!qmc->int_table) {
> > +		dev_err(qmc->dev, "Failed to allocate interrupt table\n");
> > +		ret = -ENOMEM;
> > +		goto err_tsa_serial_disconnect;
> > +	}
> > +	memset(qmc->int_table, 0, qmc->int_size);
> > +
> > +	qmc->int_curr = qmc->int_table;
> > +	qmc_write32(qmc->scc_pram + QMC_GBL_INTBASE, qmc-
> >int_dma_addr);
> > +	qmc_write32(qmc->scc_pram + QMC_GBL_INTPTR, qmc-
> >int_dma_addr);
> > +
> > +	/* Set MRBLR (valid for HDLC only) max MRU + max CRC */
> > +	qmc_write16(qmc->scc_pram + QMC_GBL_MRBLR,
> HDLC_MAX_MRU + 4);
> > +
> > +	qmc_write16(qmc->scc_pram + QMC_GBL_GRFTHR, 1);
> > +	qmc_write16(qmc->scc_pram + QMC_GBL_GRFCNT, 1);
> > +
> > +	qmc_write32(qmc->scc_pram + QMC_GBL_C_MASK32, 0xDEBB20E3);
> > +	qmc_write16(qmc->scc_pram + QMC_GBL_C_MASK16, 0xF0B8);
> > +
> > +	ret = qmc_setup_tsa(qmc);
> > +	if (ret)
> > +		goto err_tsa_serial_disconnect;
> > +
> > +	qmc_write16(qmc->scc_pram + QMC_GBL_QMCSTATE, 0x8000);
> > +
> > +	ret = qmc_setup_chans(qmc);
> > +	if (ret)
> > +		goto err_tsa_serial_disconnect;
> > +
> > +	/* Init interrupts table */
> > +	ret = qmc_setup_ints(qmc);
> > +	if (ret)
> > +		goto err_tsa_serial_disconnect;
> > +
> > +	/* Disable and clear interrupts,  set the irq handler */
> > +	qmc_write16(qmc->scc_regs + SCC_SCCM, 0x0000);
> > +	qmc_write16(qmc->scc_regs + SCC_SCCE, 0x000F);
> > +	irq = platform_get_irq(pdev, 0);
> > +	if (irq < 0)
> > +		goto err_tsa_serial_disconnect;
> > +	ret = devm_request_irq(qmc->dev, irq, qmc_irq_handler, 0, "qmc",
> qmc);
> > +	if (ret < 0)
> > +		goto err_tsa_serial_disconnect;
> > +
> > +	/* Enable interrupts */
> > +	qmc_write16(qmc->scc_regs + SCC_SCCM,
> > +		SCC_SCCE_IQOV | SCC_SCCE_GINT | SCC_SCCE_GUN |
> SCC_SCCE_GOV);
> > +
> > +	ret = qmc_finalize_chans(qmc);
> > +	if (ret < 0)
> > +		goto err_disable_intr;
> > +
> > +	/* Enable transmiter and receiver */
> > +	qmc_setbits32(qmc->scc_regs + SCC_GSMRL, SCC_GSMRL_ENR |
> > +SCC_GSMRL_ENT);
> > +
> > +	platform_set_drvdata(pdev, qmc);
> > +
> > +	return 0;
> > +
> > +err_disable_intr:
> > +	qmc_write16(qmc->scc_regs + SCC_SCCM, 0);
> > +
> > +err_tsa_serial_disconnect:
> > +	tsa_serial_disconnect(qmc->tsa_serial);
> > +	return ret;
> > +}
> > +
> > +static int qmc_remove(struct platform_device *pdev) {
> > +	struct qmc *qmc = platform_get_drvdata(pdev);
> > +
> > +	/* Disable transmiter and receiver */
> > +	qmc_setbits32(qmc->scc_regs + SCC_GSMRL, 0);
> > +
> > +	/* Disable interrupts */
> > +	qmc_write16(qmc->scc_regs + SCC_SCCM, 0);
> > +
> > +	/* Disconnect the serial from TSA */
> > +	tsa_serial_disconnect(qmc->tsa_serial);
> > +
> > +	return 0;
> > +}
> > +
> > +static const struct of_device_id qmc_id_table[] = {
> > +	{ .compatible = "fsl,cpm1-scc-qmc" },
> > +	{} /* sentinel */
> > +};
> > +MODULE_DEVICE_TABLE(of, qmc_id_table);
> > +
> > +static struct platform_driver qmc_driver = {
> > +	.driver = {
> > +		.name = "fsl-qmc",
> > +		.of_match_table = of_match_ptr(qmc_id_table),
> > +	},
> > +	.probe = qmc_probe,
> > +	.remove = qmc_remove,
> > +};
> > +module_platform_driver(qmc_driver);
> > +
> > +struct qmc_chan *qmc_chan_get_byphandle(struct device_node *np,
> const
> > +char *phandle_name) {
> > +	struct of_phandle_args out_args;
> > +	struct platform_device *pdev;
> > +	struct qmc_chan *qmc_chan;
> > +	struct qmc *qmc;
> > +	int ret;
> > +
> > +	ret = of_parse_phandle_with_fixed_args(np, phandle_name, 1, 0,
> > +					       &out_args);
> > +	if (ret < 0)
> > +		return ERR_PTR(ret);
> > +
> > +	if (!of_match_node(qmc_driver.driver.of_match_table,
> out_args.np)) {
> > +		of_node_put(out_args.np);
> > +		return ERR_PTR(-EINVAL);
> > +	}
> > +
> > +	pdev = of_find_device_by_node(out_args.np);
> > +	of_node_put(out_args.np);
> > +	if (!pdev)
> > +		return ERR_PTR(-ENODEV);
> > +
> > +	qmc = platform_get_drvdata(pdev);
> > +	if (!qmc) {
> > +		platform_device_put(pdev);
> > +		return ERR_PTR(-EPROBE_DEFER);
> > +	}
> > +
> > +	if (out_args.args_count != 1) {
> > +		platform_device_put(pdev);
> > +		return ERR_PTR(-EINVAL);
> > +	}
> > +
> > +	if (out_args.args[0] >= ARRAY_SIZE(qmc->chans)) {
> > +		platform_device_put(pdev);
> > +		return ERR_PTR(-EINVAL);
> > +	}
> > +
> > +	qmc_chan = qmc->chans[out_args.args[0]];
> > +	if (!qmc_chan) {
> > +		platform_device_put(pdev);
> > +		return ERR_PTR(-ENOENT);
> > +	}
> > +
> > +	return qmc_chan;
> > +}
> > +EXPORT_SYMBOL(qmc_chan_get_byphandle);
> > +
> > +void qmc_chan_put(struct qmc_chan *chan) {
> > +	put_device(chan->qmc->dev);
> > +}
> > +EXPORT_SYMBOL(qmc_chan_put);
> > +
> > +static void devm_qmc_chan_release(struct device *dev, void *res) {
> > +	struct qmc_chan **qmc_chan = res;
> > +
> > +	qmc_chan_put(*qmc_chan);
> > +}
> > +
> > +struct qmc_chan *devm_qmc_chan_get_byphandle(struct device *dev,
> > +					     struct device_node *np,
> > +					     const char *phandle_name)
> > +{
> > +	struct qmc_chan *qmc_chan;
> > +	struct qmc_chan **dr;
> > +
> > +	dr = devres_alloc(devm_qmc_chan_release, sizeof(*dr),
> GFP_KERNEL);
> > +	if (!dr)
> > +		return ERR_PTR(-ENOMEM);
> > +
> > +	qmc_chan = qmc_chan_get_byphandle(np, phandle_name);
> > +	if (!IS_ERR(qmc_chan)) {
> > +		*dr = qmc_chan;
> > +		devres_add(dev, dr);
> > +	} else {
> > +		devres_free(dr);
> > +	}
> > +
> > +	return qmc_chan;
> > +}
> > +EXPORT_SYMBOL(devm_qmc_chan_get_byphandle);
> > +
> > +MODULE_AUTHOR("Herve Codina <herve.codina@bootlin.com>");
> > +MODULE_DESCRIPTION("CPM QMC driver"); MODULE_LICENSE("GPL");
> > diff --git a/include/soc/fsl/qe/qmc.h b/include/soc/fsl/qe/qmc.h new
> > file mode 100644 index 000000000000..3c61a50d2ae2
> > --- /dev/null
> > +++ b/include/soc/fsl/qe/qmc.h
> > @@ -0,0 +1,71 @@
> > +/* SPDX-License-Identifier: GPL-2.0-or-later */
> > +/*
> > + * QMC management
> > + *
> > + * Copyright 2022 CS GROUP France
> > + *
> > + * Author: Herve Codina <herve.codina@bootlin.com>  */ #ifndef
> > +__SOC_FSL_QMC_H__ #define __SOC_FSL_QMC_H__
> > +
> > +#include <linux/types.h>
> > +
> > +struct device_node;
> > +struct device;
> > +struct qmc_chan;
> > +
> > +struct qmc_chan *qmc_chan_get_byphandle(struct device_node *np,
> const
> > +char *phandle_name); void qmc_chan_put(struct qmc_chan *chan);
> struct
> > +qmc_chan *devm_qmc_chan_get_byphandle(struct device *dev, struct
> device_node *np,
> > +					     const char *phandle_name);
> > +
> > +enum qmc_mode {
> > +	QMC_TRANSPARENT,
> > +	QMC_HDLC,
> > +};
> > +
> > +struct qmc_chan_info {
> > +	enum qmc_mode mode;
> > +	unsigned long rx_fs_rate;
> > +	unsigned long rx_bit_rate;
> > +	u8 nb_rx_ts;
> > +	unsigned long tx_fs_rate;
> > +	unsigned long tx_bit_rate;
> > +	u8 nb_tx_ts;
> > +};
> > +
> > +int qmc_chan_get_info(struct qmc_chan *chan, struct qmc_chan_info
> > +*info);
> > +
> > +struct qmc_chan_param {
> > +	enum qmc_mode mode;
> > +	union {
> > +		struct {
> > +			u16 max_rx_buf_size;
> > +			u16 max_rx_frame_size;
> > +			bool is_crc32;
> > +		} hdlc;
> > +		struct {
> > +			u16 max_rx_buf_size;
> > +		} transp;
> > +	};
> > +};
> > +
> > +int qmc_chan_set_param(struct qmc_chan *chan, const struct
> > +qmc_chan_param *param);
> > +
> > +int qmc_chan_write_submit(struct qmc_chan *chan, dma_addr_t addr,
> size_t length,
> > +			  void (*complete)(void *context), void *context);
> > +
> > +int qmc_chan_read_submit(struct qmc_chan *chan, dma_addr_t addr,
> size_t length,
> > +			 void (*complete)(void *context, size_t length),
> > +			 void *context);
> > +
> > +#define QMC_CHAN_READ  (1<<0)
> > +#define QMC_CHAN_WRITE (1<<1)
> > +#define QMC_CHAN_ALL   (QMC_CHAN_READ | QMC_CHAN_WRITE)
> > +
> > +int qmc_chan_start(struct qmc_chan *chan, int direction); int
> > +qmc_chan_stop(struct qmc_chan *chan, int direction); int
> > +qmc_chan_reset(struct qmc_chan *chan, int direction);
> > +
> > +#endif /* __SOC_FSL_QMC_H__ */
Leo Li Feb. 15, 2023, 10:44 p.m. UTC | #5
> -----Original Message-----
> From: Herve Codina <herve.codina@bootlin.com>
> Sent: Thursday, January 26, 2023 2:32 AM
> To: Herve Codina <herve.codina@bootlin.com>; Leo Li
> <leoyang.li@nxp.com>; Rob Herring <robh+dt@kernel.org>; Krzysztof
> Kozlowski <krzysztof.kozlowski+dt@linaro.org>; Liam Girdwood
> <lgirdwood@gmail.com>; Mark Brown <broonie@kernel.org>; Christophe
> Leroy <christophe.leroy@csgroup.eu>; Michael Ellerman
> <mpe@ellerman.id.au>; Nicholas Piggin <npiggin@gmail.com>; Qiang Zhao
> <qiang.zhao@nxp.com>; Jaroslav Kysela <perex@perex.cz>; Takashi Iwai
> <tiwai@suse.com>; Shengjiu Wang <shengjiu.wang@gmail.com>; Xiubo Li
> <Xiubo.Lee@gmail.com>; Fabio Estevam <festevam@gmail.com>; Nicolin
> Chen <nicoleotsuka@gmail.com>
> Cc: linuxppc-dev@lists.ozlabs.org; linux-arm-kernel@lists.infradead.org;
> devicetree@vger.kernel.org; linux-kernel@vger.kernel.org; alsa-devel@alsa-
> project.org; Thomas Petazzoni <thomas.petazzoni@bootlin.com>
> Subject: [PATCH v4 06/10] soc: fsl: cmp1: Add support for QMC

Typo: cpm1

> 
> The QMC (QUICC Multichannel Controller) emulates up to 64 channels within
> one serial controller using the same TDM physical interface routed from the
> TSA.
> 
> It is available in some	PowerQUICC SoC such as the
> MPC885 or MPC866.
> 
> It is also available on some Quicc Engine SoCs.
> This current version support CPM1 SoCs only and some enhancement are
> needed to support Quicc Engine SoCs.
> 
> Signed-off-by: Herve Codina <herve.codina@bootlin.com>

Otherwise looks good to me.

Acked-by: Li Yang <leoyang.li@nxp.com>

> ---
>  drivers/soc/fsl/qe/Kconfig  |   12 +
>  drivers/soc/fsl/qe/Makefile |    1 +
>  drivers/soc/fsl/qe/qmc.c    | 1533
> +++++++++++++++++++++++++++++++++++
>  include/soc/fsl/qe/qmc.h    |   71 ++
>  4 files changed, 1617 insertions(+)
>  create mode 100644 drivers/soc/fsl/qe/qmc.c  create mode 100644
> include/soc/fsl/qe/qmc.h
> 
> diff --git a/drivers/soc/fsl/qe/Kconfig b/drivers/soc/fsl/qe/Kconfig index
> 60ec11c9f4d9..25b218351ae3 100644
> --- a/drivers/soc/fsl/qe/Kconfig
> +++ b/drivers/soc/fsl/qe/Kconfig
> @@ -44,6 +44,18 @@ config CPM_TSA
>  	  This option enables support for this
>  	  controller
> 
> +config CPM_QMC
> +	tristate "CPM QMC support"
> +	depends on OF && HAS_IOMEM
> +	depends on CPM1 || (PPC && COMPILE_TEST)
> +	depends on CPM_TSA
> +	help
> +	  Freescale CPM QUICC Multichannel Controller
> +	  (QMC)
> +
> +	  This option enables support for this
> +	  controller
> +
>  config QE_TDM
>  	bool
>  	default y if FSL_UCC_HDLC
> diff --git a/drivers/soc/fsl/qe/Makefile b/drivers/soc/fsl/qe/Makefile index
> 45c961acc81b..ec8506e13113 100644
> --- a/drivers/soc/fsl/qe/Makefile
> +++ b/drivers/soc/fsl/qe/Makefile
> @@ -5,6 +5,7 @@
>  obj-$(CONFIG_QUICC_ENGINE)+= qe.o qe_common.o qe_ic.o qe_io.o
>  obj-$(CONFIG_CPM)	+= qe_common.o
>  obj-$(CONFIG_CPM_TSA)	+= tsa.o
> +obj-$(CONFIG_CPM_QMC)	+= qmc.o
>  obj-$(CONFIG_UCC)	+= ucc.o
>  obj-$(CONFIG_UCC_SLOW)	+= ucc_slow.o
>  obj-$(CONFIG_UCC_FAST)	+= ucc_fast.o
> diff --git a/drivers/soc/fsl/qe/qmc.c b/drivers/soc/fsl/qe/qmc.c new file
> mode 100644 index 000000000000..cfa7207353e0
> --- /dev/null
> +++ b/drivers/soc/fsl/qe/qmc.c
> @@ -0,0 +1,1533 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * QMC driver
> + *
> + * Copyright 2022 CS GROUP France
> + *
> + * Author: Herve Codina <herve.codina@bootlin.com>  */
> +
> +#include <soc/fsl/qe/qmc.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/hdlc.h>
> +#include <linux/interrupt.h>
> +#include <linux/io.h>
> +#include <linux/module.h>
> +#include <linux/of.h>
> +#include <linux/of_platform.h>
> +#include <linux/platform_device.h>
> +#include <linux/slab.h>
> +#include <soc/fsl/cpm.h>
> +#include <sysdev/fsl_soc.h>
> +#include "tsa.h"
> +
> +/* SCC general mode register high (32 bits) */
> +#define SCC_GSMRL	0x00
> +#define SCC_GSMRL_ENR		(1 << 5)
> +#define SCC_GSMRL_ENT		(1 << 4)
> +#define SCC_GSMRL_MODE_QMC	(0x0A << 0)
> +
> +/* SCC general mode register low (32 bits) */
> +#define SCC_GSMRH	0x04
> +#define   SCC_GSMRH_CTSS	(1 << 7)
> +#define   SCC_GSMRH_CDS		(1 << 8)
> +#define   SCC_GSMRH_CTSP	(1 << 9)
> +#define   SCC_GSMRH_CDP		(1 << 10)
> +
> +/* SCC event register (16 bits) */
> +#define SCC_SCCE	0x10
> +#define   SCC_SCCE_IQOV		(1 << 3)
> +#define   SCC_SCCE_GINT		(1 << 2)
> +#define   SCC_SCCE_GUN		(1 << 1)
> +#define   SCC_SCCE_GOV		(1 << 0)
> +
> +/* SCC mask register (16 bits) */
> +#define SCC_SCCM	0x14
> +/* Multichannel base pointer (32 bits) */
> +#define QMC_GBL_MCBASE		0x00
> +/* Multichannel controller state (16 bits) */
> +#define QMC_GBL_QMCSTATE	0x04
> +/* Maximum receive buffer length (16 bits) */
> +#define QMC_GBL_MRBLR		0x06
> +/* Tx time-slot assignment table pointer (16 bits) */
> +#define QMC_GBL_TX_S_PTR	0x08
> +/* Rx pointer (16 bits) */
> +#define QMC_GBL_RXPTR		0x0A
> +/* Global receive frame threshold (16 bits) */
> +#define QMC_GBL_GRFTHR		0x0C
> +/* Global receive frame count (16 bits) */
> +#define QMC_GBL_GRFCNT		0x0E
> +/* Multichannel interrupt base address (32 bits) */
> +#define QMC_GBL_INTBASE		0x10
> +/* Multichannel interrupt pointer (32 bits) */
> +#define QMC_GBL_INTPTR		0x14
> +/* Rx time-slot assignment table pointer (16 bits) */
> +#define QMC_GBL_RX_S_PTR	0x18
> +/* Tx pointer (16 bits) */
> +#define QMC_GBL_TXPTR		0x1A
> +/* CRC constant (32 bits) */
> +#define QMC_GBL_C_MASK32	0x1C
> +/* Time slot assignment table Rx (32 x 16 bits) */
> +#define QMC_GBL_TSATRX		0x20
> +/* Time slot assignment table Tx (32 x 16 bits) */
> +#define QMC_GBL_TSATTX		0x60
> +/* CRC constant (16 bits) */
> +#define QMC_GBL_C_MASK16	0xA0
> +
> +/* TSA entry (16bit entry in TSATRX and TSATTX) */
> +#define QMC_TSA_VALID		(1 << 15)
> +#define QMC_TSA_WRAP		(1 << 14)
> +#define QMC_TSA_MASK		(0x303F)
> +#define QMC_TSA_CHANNEL(x)	((x) << 6)
> +
> +/* Tx buffer descriptor base address (16 bits, offset from MCBASE) */
> +#define QMC_SPE_TBASE	0x00
> +
> +/* Channel mode register (16 bits) */
> +#define QMC_SPE_CHAMR	0x02
> +#define   QMC_SPE_CHAMR_MODE_HDLC	(1 << 15)
> +#define   QMC_SPE_CHAMR_MODE_TRANSP	((0 << 15) | (1 << 13))
> +#define   QMC_SPE_CHAMR_ENT		(1 << 12)
> +#define   QMC_SPE_CHAMR_POL		(1 << 8)
> +#define   QMC_SPE_CHAMR_HDLC_IDLM	(1 << 13)
> +#define   QMC_SPE_CHAMR_HDLC_CRC	(1 << 7)
> +#define   QMC_SPE_CHAMR_HDLC_NOF	(0x0f << 0)
> +#define   QMC_SPE_CHAMR_TRANSP_RD	(1 << 14)
> +#define   QMC_SPE_CHAMR_TRANSP_SYNC	(1 << 10)
> +
> +/* Tx internal state (32 bits) */
> +#define QMC_SPE_TSTATE	0x04
> +/* Tx buffer descriptor pointer (16 bits) */
> +#define QMC_SPE_TBPTR	0x0C
> +/* Zero-insertion state (32 bits) */
> +#define QMC_SPE_ZISTATE	0x14
> +/* Channel’s interrupt mask flags (16 bits) */
> +#define QMC_SPE_INTMSK	0x1C
> +/* Rx buffer descriptor base address (16 bits, offset from MCBASE) */
> +#define QMC_SPE_RBASE	0x20
> +/* HDLC: Maximum frame length register (16 bits) */
> +#define QMC_SPE_MFLR	0x22
> +/* TRANSPARENT: Transparent maximum receive length (16 bits) */
> +#define QMC_SPE_TMRBLR	0x22
> +/* Rx internal state (32 bits) */
> +#define QMC_SPE_RSTATE	0x24
> +/* Rx buffer descriptor pointer (16 bits) */
> +#define QMC_SPE_RBPTR	0x2C
> +/* Packs 4 bytes to 1 long word before writing to buffer (32 bits) */
> +#define QMC_SPE_RPACK	0x30
> +/* Zero deletion state (32 bits) */
> +#define QMC_SPE_ZDSTATE	0x34
> +
> +/* Transparent synchronization (16 bits) */ #define QMC_SPE_TRNSYNC
> +0x3C
> +#define   QMC_SPE_TRNSYNC_RX(x)	((x) << 8)
> +#define   QMC_SPE_TRNSYNC_TX(x)	((x) << 0)
> +
> +/* Interrupt related registers bits */
> +#define QMC_INT_V		(1 << 15)
> +#define QMC_INT_W		(1 << 14)
> +#define QMC_INT_NID		(1 << 13)
> +#define QMC_INT_IDL		(1 << 12)
> +#define QMC_INT_GET_CHANNEL(x)	(((x) & 0x0FC0) >> 6)
> +#define QMC_INT_MRF		(1 << 5)
> +#define QMC_INT_UN		(1 << 4)
> +#define QMC_INT_RXF		(1 << 3)
> +#define QMC_INT_BSY		(1 << 2)
> +#define QMC_INT_TXB		(1 << 1)
> +#define QMC_INT_RXB		(1 << 0)
> +
> +/* BD related registers bits */
> +#define QMC_BD_RX_E	(1 << 15)
> +#define QMC_BD_RX_W	(1 << 13)
> +#define QMC_BD_RX_I	(1 << 12)
> +#define QMC_BD_RX_L	(1 << 11)
> +#define QMC_BD_RX_F	(1 << 10)
> +#define QMC_BD_RX_CM	(1 << 9)
> +#define QMC_BD_RX_UB	(1 << 7)
> +#define QMC_BD_RX_LG	(1 << 5)
> +#define QMC_BD_RX_NO	(1 << 4)
> +#define QMC_BD_RX_AB	(1 << 3)
> +#define QMC_BD_RX_CR	(1 << 2)
> +
> +#define QMC_BD_TX_R	(1 << 15)
> +#define QMC_BD_TX_W	(1 << 13)
> +#define QMC_BD_TX_I	(1 << 12)
> +#define QMC_BD_TX_L	(1 << 11)
> +#define QMC_BD_TX_TC	(1 << 10)
> +#define QMC_BD_TX_CM	(1 << 9)
> +#define QMC_BD_TX_UB	(1 << 7)
> +#define QMC_BD_TX_PAD	(0x0f << 0)
> +
> +/* Numbers of BDs and interrupt items */
> +#define QMC_NB_TXBDS	8
> +#define QMC_NB_RXBDS	8
> +#define QMC_NB_INTS	128
> +
> +struct qmc_xfer_desc {
> +	union {
> +		void (*tx_complete)(void *context);
> +		void (*rx_complete)(void *context, size_t length);
> +	};
> +	void *context;
> +};
> +
> +struct qmc_chan {
> +	struct list_head list;
> +	unsigned int id;
> +	struct qmc *qmc;
> +	void *__iomem s_param;
> +	enum qmc_mode mode;
> +	u64	tx_ts_mask;
> +	u64	rx_ts_mask;
> +	bool is_reverse_data;
> +
> +	spinlock_t	tx_lock;
> +	cbd_t __iomem *txbds;
> +	cbd_t __iomem *txbd_free;
> +	cbd_t __iomem *txbd_done;
> +	struct qmc_xfer_desc tx_desc[QMC_NB_TXBDS];
> +	u64	nb_tx_underrun;
> +	bool	is_tx_stopped;
> +
> +	spinlock_t	rx_lock;
> +	cbd_t __iomem *rxbds;
> +	cbd_t __iomem *rxbd_free;
> +	cbd_t __iomem *rxbd_done;
> +	struct qmc_xfer_desc rx_desc[QMC_NB_RXBDS];
> +	u64	nb_rx_busy;
> +	int	rx_pending;
> +	bool	is_rx_halted;
> +	bool	is_rx_stopped;
> +};
> +
> +struct qmc {
> +	struct device *dev;
> +	struct tsa_serial *tsa_serial;
> +	void *__iomem scc_regs;
> +	void *__iomem scc_pram;
> +	void *__iomem dpram;
> +	u16 scc_pram_offset;
> +	cbd_t __iomem *bd_table;
> +	dma_addr_t bd_dma_addr;
> +	size_t bd_size;
> +	u16 __iomem *int_table;
> +	u16 __iomem *int_curr;
> +	dma_addr_t int_dma_addr;
> +	size_t int_size;
> +	struct list_head chan_head;
> +	struct qmc_chan *chans[64];
> +};
> +
> +static inline void qmc_write16(void *__iomem addr, u16 val) {
> +	iowrite16be(val, addr);
> +}
> +
> +static inline u16 qmc_read16(void *__iomem addr) {
> +	return ioread16be(addr);
> +}
> +
> +static inline void qmc_setbits16(void *__iomem addr, u16 set) {
> +	qmc_write16(addr, qmc_read16(addr) | set); }
> +
> +static inline void qmc_clrbits16(void *__iomem addr, u16 clr) {
> +	qmc_write16(addr, qmc_read16(addr) & ~clr); }
> +
> +static inline void qmc_write32(void *__iomem addr, u32 val) {
> +	iowrite32be(val, addr);
> +}
> +
> +static inline u32 qmc_read32(void *__iomem addr) {
> +	return ioread32be(addr);
> +}
> +
> +static inline void qmc_setbits32(void *__iomem addr, u32 set) {
> +	qmc_write32(addr, qmc_read32(addr) | set); }
> +
> +
> +int qmc_chan_get_info(struct qmc_chan *chan, struct qmc_chan_info
> *info)
> +{
> +	struct tsa_serial_info tsa_info;
> +	int ret;
> +
> +	/* Retrieve info from the TSA related serial */
> +	ret = tsa_serial_get_info(chan->qmc->tsa_serial, &tsa_info);
> +	if (ret)
> +		return ret;
> +
> +	info->mode = chan->mode;
> +	info->rx_fs_rate = tsa_info.rx_fs_rate;
> +	info->rx_bit_rate = tsa_info.rx_bit_rate;
> +	info->nb_tx_ts = hweight64(chan->tx_ts_mask);
> +	info->tx_fs_rate = tsa_info.tx_fs_rate;
> +	info->tx_bit_rate = tsa_info.tx_bit_rate;
> +	info->nb_rx_ts = hweight64(chan->rx_ts_mask);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(qmc_chan_get_info);
> +
> +int qmc_chan_set_param(struct qmc_chan *chan, const struct
> qmc_chan_param *param)
> +{
> +	if (param->mode != chan->mode)
> +		return -EINVAL;
> +
> +	switch (param->mode) {
> +	case QMC_HDLC:
> +		if ((param->hdlc.max_rx_buf_size % 4) ||
> +		    (param->hdlc.max_rx_buf_size < 8))
> +			return -EINVAL;
> +
> +		qmc_write16(chan->qmc->scc_pram + QMC_GBL_MRBLR,
> +			    param->hdlc.max_rx_buf_size - 8);
> +		qmc_write16(chan->s_param + QMC_SPE_MFLR,
> +			    param->hdlc.max_rx_frame_size);
> +		if (param->hdlc.is_crc32) {
> +			qmc_setbits16(chan->s_param + QMC_SPE_CHAMR,
> +				      QMC_SPE_CHAMR_HDLC_CRC);
> +		} else {
> +			qmc_clrbits16(chan->s_param + QMC_SPE_CHAMR,
> +				      QMC_SPE_CHAMR_HDLC_CRC);
> +		}
> +		break;
> +
> +	case QMC_TRANSPARENT:
> +		qmc_write16(chan->s_param + QMC_SPE_TMRBLR,
> +			    param->transp.max_rx_buf_size);
> +		break;
> +
> +	default:
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(qmc_chan_set_param);
> +
> +int qmc_chan_write_submit(struct qmc_chan *chan, dma_addr_t addr,
> size_t length,
> +			  void (*complete)(void *context), void *context)
> +{
> +	struct qmc_xfer_desc *xfer_desc;
> +	unsigned long flags;
> +	cbd_t *__iomem bd;
> +	u16 ctrl;
> +	int ret;
> +
> +	/*
> +	 * R bit  UB bit
> +	 *   0       0  : The BD is free
> +	 *   1       1  : The BD is in used, waiting for transfer
> +	 *   0       1  : The BD is in used, waiting for completion
> +	 *   1       0  : Should not append
> +	 */
> +
> +	spin_lock_irqsave(&chan->tx_lock, flags);
> +	bd = chan->txbd_free;
> +
> +	ctrl = qmc_read16(&bd->cbd_sc);
> +	if (ctrl & (QMC_BD_TX_R | QMC_BD_TX_UB)) {
> +		/* We are full ... */
> +		ret = -EBUSY;
> +		goto end;
> +	}
> +
> +	qmc_write16(&bd->cbd_datlen, length);
> +	qmc_write32(&bd->cbd_bufaddr, addr);
> +
> +	xfer_desc = &chan->tx_desc[bd - chan->txbds];
> +	xfer_desc->tx_complete = complete;
> +	xfer_desc->context = context;
> +
> +	/* Activate the descriptor */
> +	ctrl |= (QMC_BD_TX_R | QMC_BD_TX_UB);
> +	wmb(); /* Be sure to flush the descriptor before control update */
> +	qmc_write16(&bd->cbd_sc, ctrl);
> +
> +	if (!chan->is_tx_stopped)
> +		qmc_setbits16(chan->s_param + QMC_SPE_CHAMR,
> QMC_SPE_CHAMR_POL);
> +
> +	if (ctrl & QMC_BD_TX_W)
> +		chan->txbd_free = chan->txbds;
> +	else
> +		chan->txbd_free++;
> +
> +	ret = 0;
> +
> +end:
> +	spin_unlock_irqrestore(&chan->tx_lock, flags);
> +	return ret;
> +}
> +EXPORT_SYMBOL(qmc_chan_write_submit);
> +
> +static void qmc_chan_write_done(struct qmc_chan *chan)
> +{
> +	struct qmc_xfer_desc *xfer_desc;
> +	void (*complete)(void *context);
> +	unsigned long flags;
> +	void *context;
> +	cbd_t *__iomem bd;
> +	u16 ctrl;
> +
> +	/*
> +	 * R bit  UB bit
> +	 *   0       0  : The BD is free
> +	 *   1       1  : The BD is in used, waiting for transfer
> +	 *   0       1  : The BD is in used, waiting for completion
> +	 *   1       0  : Should not append
> +	 */
> +
> +	spin_lock_irqsave(&chan->tx_lock, flags);
> +	bd = chan->txbd_done;
> +
> +	ctrl = qmc_read16(&bd->cbd_sc);
> +	while (!(ctrl & QMC_BD_TX_R)) {
> +		if (!(ctrl & QMC_BD_TX_UB))
> +			goto end;
> +
> +		xfer_desc = &chan->tx_desc[bd - chan->txbds];
> +		complete = xfer_desc->tx_complete;
> +		context = xfer_desc->context;
> +		xfer_desc->tx_complete = NULL;
> +		xfer_desc->context = NULL;
> +
> +		qmc_write16(&bd->cbd_sc, ctrl & ~QMC_BD_TX_UB);
> +
> +		if (ctrl & QMC_BD_TX_W)
> +			chan->txbd_done = chan->txbds;
> +		else
> +			chan->txbd_done++;
> +
> +		if (complete) {
> +			spin_unlock_irqrestore(&chan->tx_lock, flags);
> +			complete(context);
> +			spin_lock_irqsave(&chan->tx_lock, flags);
> +		}
> +
> +		bd = chan->txbd_done;
> +		ctrl = qmc_read16(&bd->cbd_sc);
> +	}
> +
> +end:
> +	spin_unlock_irqrestore(&chan->tx_lock, flags);
> +}
> +
> +int qmc_chan_read_submit(struct qmc_chan *chan, dma_addr_t addr,
> size_t length,
> +			 void (*complete)(void *context, size_t length), void
> *context)
> +{
> +	struct qmc_xfer_desc *xfer_desc;
> +	unsigned long flags;
> +	cbd_t *__iomem bd;
> +	u16 ctrl;
> +	int ret;
> +
> +	/*
> +	 * E bit  UB bit
> +	 *   0       0  : The BD is free
> +	 *   1       1  : The BD is in used, waiting for transfer
> +	 *   0       1  : The BD is in used, waiting for completion
> +	 *   1       0  : Should not append
> +	 */
> +
> +	spin_lock_irqsave(&chan->rx_lock, flags);
> +	bd = chan->rxbd_free;
> +
> +	ctrl = qmc_read16(&bd->cbd_sc);
> +	if (ctrl & (QMC_BD_RX_E | QMC_BD_RX_UB)) {
> +		/* We are full ... */
> +		ret = -EBUSY;
> +		goto end;
> +	}
> +
> +	qmc_write16(&bd->cbd_datlen, 0); /* data length is updated by the
> QMC */
> +	qmc_write32(&bd->cbd_bufaddr, addr);
> +
> +	xfer_desc = &chan->rx_desc[bd - chan->rxbds];
> +	xfer_desc->rx_complete = complete;
> +	xfer_desc->context = context;
> +
> +	/* Activate the descriptor */
> +	ctrl |= (QMC_BD_RX_E | QMC_BD_RX_UB);
> +	wmb(); /* Be sure to flush data before descriptor activation */
> +	qmc_write16(&bd->cbd_sc, ctrl);
> +
> +	/* Restart receiver if needed */
> +	if (chan->is_rx_halted && !chan->is_rx_stopped) {
> +		/* Restart receiver */
> +		if (chan->mode == QMC_TRANSPARENT)
> +			qmc_write32(chan->s_param + QMC_SPE_ZDSTATE,
> 0x18000080);
> +		else
> +			qmc_write32(chan->s_param + QMC_SPE_ZDSTATE,
> 0x00000080);
> +		qmc_write32(chan->s_param + QMC_SPE_RSTATE,
> 0x31000000);
> +		chan->is_rx_halted = false;
> +	}
> +	chan->rx_pending++;
> +
> +	if (ctrl & QMC_BD_RX_W)
> +		chan->rxbd_free = chan->rxbds;
> +	else
> +		chan->rxbd_free++;
> +
> +	ret = 0;
> +end:
> +	spin_unlock_irqrestore(&chan->rx_lock, flags);
> +	return ret;
> +}
> +EXPORT_SYMBOL(qmc_chan_read_submit);
> +
> +static void qmc_chan_read_done(struct qmc_chan *chan)
> +{
> +	void (*complete)(void *context, size_t size);
> +	struct qmc_xfer_desc *xfer_desc;
> +	unsigned long flags;
> +	cbd_t *__iomem bd;
> +	void *context;
> +	u16 datalen;
> +	u16 ctrl;
> +
> +	/*
> +	 * E bit  UB bit
> +	 *   0       0  : The BD is free
> +	 *   1       1  : The BD is in used, waiting for transfer
> +	 *   0       1  : The BD is in used, waiting for completion
> +	 *   1       0  : Should not append
> +	 */
> +
> +	spin_lock_irqsave(&chan->rx_lock, flags);
> +	bd = chan->rxbd_done;
> +
> +	ctrl = qmc_read16(&bd->cbd_sc);
> +	while (!(ctrl & QMC_BD_RX_E)) {
> +		if (!(ctrl & QMC_BD_RX_UB))
> +			goto end;
> +
> +		xfer_desc = &chan->rx_desc[bd - chan->rxbds];
> +		complete = xfer_desc->rx_complete;
> +		context = xfer_desc->context;
> +		xfer_desc->rx_complete = NULL;
> +		xfer_desc->context = NULL;
> +
> +		datalen = qmc_read16(&bd->cbd_datlen);
> +		qmc_write16(&bd->cbd_sc, ctrl & ~QMC_BD_RX_UB);
> +
> +		if (ctrl & QMC_BD_RX_W)
> +			chan->rxbd_done = chan->rxbds;
> +		else
> +			chan->rxbd_done++;
> +
> +		chan->rx_pending--;
> +
> +		if (complete) {
> +			spin_unlock_irqrestore(&chan->rx_lock, flags);
> +			complete(context, datalen);
> +			spin_lock_irqsave(&chan->rx_lock, flags);
> +		}
> +
> +		bd = chan->rxbd_done;
> +		ctrl = qmc_read16(&bd->cbd_sc);
> +	}
> +
> +end:
> +	spin_unlock_irqrestore(&chan->rx_lock, flags);
> +}
> +
> +static int qmc_chan_command(struct qmc_chan *chan, u8 qmc_opcode)
> +{
> +	return cpm_command(chan->id << 2, (qmc_opcode << 4) | 0x0E);
> +}
> +
> +static int qmc_chan_stop_rx(struct qmc_chan *chan)
> +{
> +	unsigned long flags;
> +	int ret;
> +
> +	spin_lock_irqsave(&chan->rx_lock, flags);
> +
> +	/* Send STOP RECEIVE command */
> +	ret = qmc_chan_command(chan, 0x0);
> +	if (ret) {
> +		dev_err(chan->qmc->dev, "chan %u: Send STOP RECEIVE
> failed (%d)\n",
> +			chan->id, ret);
> +		goto end;
> +	}
> +
> +	chan->is_rx_stopped = true;
> +
> +end:
> +	spin_unlock_irqrestore(&chan->rx_lock, flags);
> +	return ret;
> +}
> +
> +static int qmc_chan_stop_tx(struct qmc_chan *chan)
> +{
> +	unsigned long flags;
> +	int ret;
> +
> +	spin_lock_irqsave(&chan->tx_lock, flags);
> +
> +	/* Send STOP TRANSMIT command */
> +	ret = qmc_chan_command(chan, 0x1);
> +	if (ret) {
> +		dev_err(chan->qmc->dev, "chan %u: Send STOP TRANSMIT
> failed (%d)\n",
> +			chan->id, ret);
> +		goto end;
> +	}
> +
> +	chan->is_tx_stopped = true;
> +
> +end:
> +	spin_unlock_irqrestore(&chan->tx_lock, flags);
> +	return ret;
> +}
> +
> +int qmc_chan_stop(struct qmc_chan *chan, int direction)
> +{
> +	int ret;
> +
> +	if (direction & QMC_CHAN_READ) {
> +		ret = qmc_chan_stop_rx(chan);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	if (direction & QMC_CHAN_WRITE) {
> +		ret = qmc_chan_stop_tx(chan);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(qmc_chan_stop);
> +
> +static void qmc_chan_start_rx(struct qmc_chan *chan)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&chan->rx_lock, flags);
> +
> +	/* Restart the receiver */
> +	if (chan->mode == QMC_TRANSPARENT)
> +		qmc_write32(chan->s_param + QMC_SPE_ZDSTATE,
> 0x18000080);
> +	else
> +		qmc_write32(chan->s_param + QMC_SPE_ZDSTATE,
> 0x00000080);
> +	qmc_write32(chan->s_param + QMC_SPE_RSTATE, 0x31000000);
> +	chan->is_rx_halted = false;
> +
> +	chan->is_rx_stopped = false;
> +
> +	spin_unlock_irqrestore(&chan->rx_lock, flags);
> +}
> +
> +static void qmc_chan_start_tx(struct qmc_chan *chan)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&chan->tx_lock, flags);
> +
> +	/*
> +	 * Enable channel transmitter as it could be disabled if
> +	 * qmc_chan_reset() was called.
> +	 */
> +	qmc_setbits16(chan->s_param + QMC_SPE_CHAMR,
> QMC_SPE_CHAMR_ENT);
> +
> +	/* Set the POL bit in the channel mode register */
> +	qmc_setbits16(chan->s_param + QMC_SPE_CHAMR,
> QMC_SPE_CHAMR_POL);
> +
> +	chan->is_tx_stopped = false;
> +
> +	spin_unlock_irqrestore(&chan->tx_lock, flags);
> +}
> +
> +int qmc_chan_start(struct qmc_chan *chan, int direction)
> +{
> +	if (direction & QMC_CHAN_READ)
> +		qmc_chan_start_rx(chan);
> +
> +	if (direction & QMC_CHAN_WRITE)
> +		qmc_chan_start_tx(chan);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(qmc_chan_start);
> +
> +static void qmc_chan_reset_rx(struct qmc_chan *chan)
> +{
> +	struct qmc_xfer_desc *xfer_desc;
> +	unsigned long flags;
> +	cbd_t *__iomem bd;
> +	u16 ctrl;
> +
> +	spin_lock_irqsave(&chan->rx_lock, flags);
> +	bd = chan->rxbds;
> +	do {
> +		ctrl = qmc_read16(&bd->cbd_sc);
> +		qmc_write16(&bd->cbd_sc, ctrl & ~(QMC_BD_RX_UB |
> QMC_BD_RX_E));
> +
> +		xfer_desc = &chan->rx_desc[bd - chan->rxbds];
> +		xfer_desc->rx_complete = NULL;
> +		xfer_desc->context = NULL;
> +
> +		bd++;
> +	} while (!(ctrl & QMC_BD_RX_W));
> +
> +	chan->rxbd_free = chan->rxbds;
> +	chan->rxbd_done = chan->rxbds;
> +	qmc_write16(chan->s_param + QMC_SPE_RBPTR,
> +		    qmc_read16(chan->s_param + QMC_SPE_RBASE));
> +
> +	chan->rx_pending = 0;
> +	chan->is_rx_stopped = false;
> +
> +	spin_unlock_irqrestore(&chan->rx_lock, flags);
> +}
> +
> +static void qmc_chan_reset_tx(struct qmc_chan *chan)
> +{
> +	struct qmc_xfer_desc *xfer_desc;
> +	unsigned long flags;
> +	cbd_t *__iomem bd;
> +	u16 ctrl;
> +
> +	spin_lock_irqsave(&chan->tx_lock, flags);
> +
> +	/* Disable transmitter. It will be re-enable on qmc_chan_start() */
> +	qmc_clrbits16(chan->s_param + QMC_SPE_CHAMR,
> QMC_SPE_CHAMR_ENT);
> +
> +	bd = chan->txbds;
> +	do {
> +		ctrl = qmc_read16(&bd->cbd_sc);
> +		qmc_write16(&bd->cbd_sc, ctrl & ~(QMC_BD_TX_UB |
> QMC_BD_TX_R));
> +
> +		xfer_desc = &chan->tx_desc[bd - chan->txbds];
> +		xfer_desc->tx_complete = NULL;
> +		xfer_desc->context = NULL;
> +
> +		bd++;
> +	} while (!(ctrl & QMC_BD_TX_W));
> +
> +	chan->txbd_free = chan->txbds;
> +	chan->txbd_done = chan->txbds;
> +	qmc_write16(chan->s_param + QMC_SPE_TBPTR,
> +		    qmc_read16(chan->s_param + QMC_SPE_TBASE));
> +
> +	/* Reset TSTATE and ZISTATE to their initial value */
> +	qmc_write32(chan->s_param + QMC_SPE_TSTATE, 0x30000000);
> +	qmc_write32(chan->s_param + QMC_SPE_ZISTATE, 0x00000100);
> +
> +	spin_unlock_irqrestore(&chan->tx_lock, flags);
> +}
> +
> +int qmc_chan_reset(struct qmc_chan *chan, int direction)
> +{
> +	if (direction & QMC_CHAN_READ)
> +		qmc_chan_reset_rx(chan);
> +
> +	if (direction & QMC_CHAN_WRITE)
> +		qmc_chan_reset_tx(chan);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(qmc_chan_reset);
> +
> +static int qmc_check_chans(struct qmc *qmc)
> +{
> +	struct tsa_serial_info info;
> +	bool is_one_table = false;
> +	struct qmc_chan *chan;
> +	u64 tx_ts_mask = 0;
> +	u64 rx_ts_mask = 0;
> +	u64 tx_ts_assigned_mask;
> +	u64 rx_ts_assigned_mask;
> +	int ret;
> +
> +	/* Retrieve info from the TSA related serial */
> +	ret = tsa_serial_get_info(qmc->tsa_serial, &info);
> +	if (ret)
> +		return ret;
> +
> +	/*
> +	 * If more than 32 TS are assigned to this serial, one common table is
> +	 * used for Tx and Rx and so masks must be equal for all channels.
> +	 */
> +	if ((info.nb_tx_ts > 32) || (info.nb_rx_ts > 32)) {
> +		if (info.nb_tx_ts != info.nb_rx_ts) {
> +			dev_err(qmc->dev, "Number of TSA Tx/Rx TS
> assigned are not equal\n");
> +			return -EINVAL;
> +		}
> +		is_one_table = true;
> +	}
> +
> +
> +	tx_ts_assigned_mask = (((u64)1) << info.nb_tx_ts) - 1;
> +	rx_ts_assigned_mask = (((u64)1) << info.nb_rx_ts) - 1;
> +
> +	list_for_each_entry(chan, &qmc->chan_head, list) {
> +		if (chan->tx_ts_mask > tx_ts_assigned_mask) {
> +			dev_err(qmc->dev, "chan %u uses TSA unassigned Tx
> TS\n", chan->id);
> +			return -EINVAL;
> +		}
> +		if (tx_ts_mask & chan->tx_ts_mask) {
> +			dev_err(qmc->dev, "chan %u uses an already used
> Tx TS\n", chan->id);
> +			return -EINVAL;
> +		}
> +
> +		if (chan->rx_ts_mask > rx_ts_assigned_mask) {
> +			dev_err(qmc->dev, "chan %u uses TSA unassigned
> Rx TS\n", chan->id);
> +			return -EINVAL;
> +		}
> +		if (rx_ts_mask & chan->rx_ts_mask) {
> +			dev_err(qmc->dev, "chan %u uses an already used
> Rx TS\n", chan->id);
> +			return -EINVAL;
> +		}
> +
> +		if (is_one_table && (chan->tx_ts_mask != chan-
> >rx_ts_mask)) {
> +			dev_err(qmc->dev, "chan %u uses different Rx and
> Tx TS\n", chan->id);
> +			return -EINVAL;
> +		}
> +
> +		tx_ts_mask |= chan->tx_ts_mask;
> +		rx_ts_mask |= chan->rx_ts_mask;
> +	}
> +
> +	return 0;
> +}
> +
> +static unsigned int qmc_nb_chans(struct qmc *qmc)
> +{
> +	unsigned int count = 0;
> +	struct qmc_chan *chan;
> +
> +	list_for_each_entry(chan, &qmc->chan_head, list)
> +		count++;
> +
> +	return count;
> +}
> +
> +static int qmc_of_parse_chans(struct qmc *qmc, struct device_node *np)
> +{
> +	struct device_node *chan_np;
> +	struct qmc_chan *chan;
> +	const char *mode;
> +	u32 chan_id;
> +	u64 ts_mask;
> +	int ret;
> +
> +	for_each_available_child_of_node(np, chan_np) {
> +		ret = of_property_read_u32(chan_np, "reg", &chan_id);
> +		if (ret) {
> +			dev_err(qmc->dev, "%pOF: failed to read reg\n",
> chan_np);
> +			of_node_put(chan_np);
> +			return ret;
> +		}
> +		if (chan_id > 63) {
> +			dev_err(qmc->dev, "%pOF: Invalid chan_id\n",
> chan_np);
> +			of_node_put(chan_np);
> +			return -EINVAL;
> +		}
> +
> +		chan = devm_kzalloc(qmc->dev, sizeof(*chan), GFP_KERNEL);
> +		if (!chan) {
> +			of_node_put(chan_np);
> +			return -ENOMEM;
> +		}
> +
> +		chan->id = chan_id;
> +		spin_lock_init(&chan->rx_lock);
> +		spin_lock_init(&chan->tx_lock);
> +
> +		ret = of_property_read_u64(chan_np, "fsl,tx-ts-mask",
> &ts_mask);
> +		if (ret) {
> +			dev_err(qmc->dev, "%pOF: failed to read fsl,tx-ts-
> mask\n",
> +				chan_np);
> +			of_node_put(chan_np);
> +			return ret;
> +		}
> +		chan->tx_ts_mask = ts_mask;
> +
> +		ret = of_property_read_u64(chan_np, "fsl,rx-ts-mask",
> &ts_mask);
> +		if (ret) {
> +			dev_err(qmc->dev, "%pOF: failed to read fsl,rx-ts-
> mask\n",
> +				chan_np);
> +			of_node_put(chan_np);
> +			return ret;
> +		}
> +		chan->rx_ts_mask = ts_mask;
> +
> +		mode = "transparent";
> +		ret = of_property_read_string(chan_np, "fsl,operational-
> mode", &mode);
> +		if (ret && ret != -EINVAL) {
> +			dev_err(qmc->dev, "%pOF: failed to read
> fsl,operational-mode\n",
> +				chan_np);
> +			of_node_put(chan_np);
> +			return ret;
> +		}
> +		if (!strcmp(mode, "transparent")) {
> +			chan->mode = QMC_TRANSPARENT;
> +		} else if (!strcmp(mode, "hdlc")) {
> +			chan->mode = QMC_HDLC;
> +		} else {
> +			dev_err(qmc->dev, "%pOF: Invalid fsl,operational-
> mode (%s)\n",
> +				chan_np, mode);
> +			of_node_put(chan_np);
> +			return -EINVAL;
> +		}
> +
> +		chan->is_reverse_data = of_property_read_bool(chan_np,
> +							      "fsl,reverse-data");
> +
> +		list_add_tail(&chan->list, &qmc->chan_head);
> +		qmc->chans[chan->id] = chan;
> +	}
> +
> +	return qmc_check_chans(qmc);
> +}
> +
> +static int qmc_setup_tsa_64rxtx(struct qmc *qmc, const struct
> tsa_serial_info *info)
> +{
> +	struct qmc_chan *chan;
> +	unsigned int i;
> +	u16 val;
> +
> +	/*
> +	 * Use a common Tx/Rx 64 entries table.
> +	 * Everything was previously checked, Tx and Rx related stuffs are
> +	 * identical -> Used Rx related stuff to build the table
> +	 */
> +
> +	/* Invalidate all entries */
> +	for (i = 0; i < 64; i++)
> +		qmc_write16(qmc->scc_pram + QMC_GBL_TSATRX + (i * 2),
> 0x0000);
> +
> +	/* Set entries based on Rx stuff*/
> +	list_for_each_entry(chan, &qmc->chan_head, list) {
> +		for (i = 0; i < info->nb_rx_ts; i++) {
> +			if (!(chan->rx_ts_mask & (((u64)1) << i)))
> +				continue;
> +
> +			val = QMC_TSA_VALID | QMC_TSA_MASK |
> +			      QMC_TSA_CHANNEL(chan->id);
> +			qmc_write16(qmc->scc_pram + QMC_GBL_TSATRX +
> (i * 2), val);
> +		}
> +	}
> +
> +	/* Set Wrap bit on last entry */
> +	qmc_setbits16(qmc->scc_pram + QMC_GBL_TSATRX + ((info-
> >nb_rx_ts - 1) * 2),
> +		      QMC_TSA_WRAP);
> +
> +	/* Init pointers to the table */
> +	val = qmc->scc_pram_offset + QMC_GBL_TSATRX;
> +	qmc_write16(qmc->scc_pram + QMC_GBL_RX_S_PTR, val);
> +	qmc_write16(qmc->scc_pram + QMC_GBL_RXPTR, val);
> +	qmc_write16(qmc->scc_pram + QMC_GBL_TX_S_PTR, val);
> +	qmc_write16(qmc->scc_pram + QMC_GBL_TXPTR, val);
> +
> +	return 0;
> +}
> +
> +static int qmc_setup_tsa_32rx_32tx(struct qmc *qmc, const struct
> tsa_serial_info *info)
> +{
> +	struct qmc_chan *chan;
> +	unsigned int i;
> +	u16 val;
> +
> +	/*
> +	 * Use a Tx 32 entries table and a Rx 32 entries table.
> +	 * Everything was previously checked.
> +	 */
> +
> +	/* Invalidate all entries */
> +	for (i = 0; i < 32; i++) {
> +		qmc_write16(qmc->scc_pram + QMC_GBL_TSATRX + (i * 2),
> 0x0000);
> +		qmc_write16(qmc->scc_pram + QMC_GBL_TSATTX + (i * 2),
> 0x0000);
> +	}
> +
> +	/* Set entries based on Rx and Tx stuff*/
> +	list_for_each_entry(chan, &qmc->chan_head, list) {
> +		/* Rx part */
> +		for (i = 0; i < info->nb_rx_ts; i++) {
> +			if (!(chan->rx_ts_mask & (((u64)1) << i)))
> +				continue;
> +
> +			val = QMC_TSA_VALID | QMC_TSA_MASK |
> +			      QMC_TSA_CHANNEL(chan->id);
> +			qmc_write16(qmc->scc_pram + QMC_GBL_TSATRX +
> (i * 2), val);
> +		}
> +		/* Tx part */
> +		for (i = 0; i < info->nb_tx_ts; i++) {
> +			if (!(chan->tx_ts_mask & (((u64)1) << i)))
> +				continue;
> +
> +			val = QMC_TSA_VALID | QMC_TSA_MASK |
> +			      QMC_TSA_CHANNEL(chan->id);
> +			qmc_write16(qmc->scc_pram + QMC_GBL_TSATTX +
> (i * 2), val);
> +		}
> +	}
> +
> +	/* Set Wrap bit on last entries */
> +	qmc_setbits16(qmc->scc_pram + QMC_GBL_TSATRX + ((info-
> >nb_rx_ts - 1) * 2),
> +		      QMC_TSA_WRAP);
> +	qmc_setbits16(qmc->scc_pram + QMC_GBL_TSATTX + ((info-
> >nb_tx_ts - 1) * 2),
> +		      QMC_TSA_WRAP);
> +
> +	/* Init Rx pointers ...*/
> +	val = qmc->scc_pram_offset + QMC_GBL_TSATRX;
> +	qmc_write16(qmc->scc_pram + QMC_GBL_RX_S_PTR, val);
> +	qmc_write16(qmc->scc_pram + QMC_GBL_RXPTR, val);
> +
> +	/* ... and Tx pointers */
> +	val = qmc->scc_pram_offset + QMC_GBL_TSATTX;
> +	qmc_write16(qmc->scc_pram + QMC_GBL_TX_S_PTR, val);
> +	qmc_write16(qmc->scc_pram + QMC_GBL_TXPTR, val);
> +
> +	return 0;
> +}
> +
> +static int qmc_setup_tsa(struct qmc *qmc)
> +{
> +	struct tsa_serial_info info;
> +	int ret;
> +
> +	/* Retrieve info from the TSA related serial */
> +	ret = tsa_serial_get_info(qmc->tsa_serial, &info);
> +	if (ret)
> +		return ret;
> +
> +	/*
> +	 * Setup one common 64 entries table or two 32 entries (one for Tx
> and
> +	 * one for Tx) according to assigned TS numbers.
> +	 */
> +	return ((info.nb_tx_ts > 32) || (info.nb_rx_ts > 32)) ?
> +		qmc_setup_tsa_64rxtx(qmc, &info) :
> +		qmc_setup_tsa_32rx_32tx(qmc, &info);
> +}
> +
> +static int qmc_setup_chan_trnsync(struct qmc *qmc, struct qmc_chan
> *chan)
> +{
> +	struct tsa_serial_info info;
> +	u16 first_rx, last_tx;
> +	u16 trnsync;
> +	int ret;
> +
> +	/* Retrieve info from the TSA related serial */
> +	ret = tsa_serial_get_info(chan->qmc->tsa_serial, &info);
> +	if (ret)
> +		return ret;
> +
> +	/* Find the first Rx TS allocated to the channel */
> +	first_rx = chan->rx_ts_mask ? __ffs64(chan->rx_ts_mask) + 1 : 0;
> +
> +	/* Find the last Tx TS allocated to the channel */
> +	last_tx = fls64(chan->tx_ts_mask);
> +
> +	trnsync = 0;
> +	if (info.nb_rx_ts)
> +		trnsync |= QMC_SPE_TRNSYNC_RX((first_rx % info.nb_rx_ts)
> * 2);
> +	if (info.nb_tx_ts)
> +		trnsync |= QMC_SPE_TRNSYNC_TX((last_tx % info.nb_tx_ts)
> * 2);
> +
> +	qmc_write16(chan->s_param + QMC_SPE_TRNSYNC, trnsync);
> +
> +	dev_dbg(qmc->dev, "chan %u: trnsync=0x%04x, rx %u/%u 0x%llx,
> tx %u/%u 0x%llx\n",
> +		chan->id, trnsync,
> +		first_rx, info.nb_rx_ts, chan->rx_ts_mask,
> +		last_tx, info.nb_tx_ts, chan->tx_ts_mask);
> +
> +	return 0;
> +}
> +
> +static int qmc_setup_chan(struct qmc *qmc, struct qmc_chan *chan)
> +{
> +	unsigned int i;
> +	cbd_t __iomem *bd;
> +	int ret;
> +	u16 val;
> +
> +	chan->qmc = qmc;
> +
> +	/* Set channel specific parameter base address */
> +	chan->s_param = qmc->dpram + (chan->id * 64);
> +	/* 16 bd per channel (8 rx and 8 tx) */
> +	chan->txbds = qmc->bd_table + (chan->id * (QMC_NB_TXBDS +
> QMC_NB_RXBDS));
> +	chan->rxbds = qmc->bd_table + (chan->id * (QMC_NB_TXBDS +
> QMC_NB_RXBDS)) + QMC_NB_TXBDS;
> +
> +	chan->txbd_free = chan->txbds;
> +	chan->txbd_done = chan->txbds;
> +	chan->rxbd_free = chan->rxbds;
> +	chan->rxbd_done = chan->rxbds;
> +
> +	/* TBASE and TBPTR*/
> +	val = chan->id * (QMC_NB_TXBDS + QMC_NB_RXBDS) *
> sizeof(cbd_t);
> +	qmc_write16(chan->s_param + QMC_SPE_TBASE, val);
> +	qmc_write16(chan->s_param + QMC_SPE_TBPTR, val);
> +
> +	/* RBASE and RBPTR*/
> +	val = ((chan->id * (QMC_NB_TXBDS + QMC_NB_RXBDS)) +
> QMC_NB_TXBDS) * sizeof(cbd_t);
> +	qmc_write16(chan->s_param + QMC_SPE_RBASE, val);
> +	qmc_write16(chan->s_param + QMC_SPE_RBPTR, val);
> +	qmc_write32(chan->s_param + QMC_SPE_TSTATE, 0x30000000);
> +	qmc_write32(chan->s_param + QMC_SPE_RSTATE, 0x31000000);
> +	qmc_write32(chan->s_param + QMC_SPE_ZISTATE, 0x00000100);
> +	if (chan->mode == QMC_TRANSPARENT) {
> +		qmc_write32(chan->s_param + QMC_SPE_ZDSTATE,
> 0x18000080);
> +		qmc_write16(chan->s_param + QMC_SPE_TMRBLR, 60);
> +		val = QMC_SPE_CHAMR_MODE_TRANSP |
> QMC_SPE_CHAMR_TRANSP_SYNC;
> +		if (chan->is_reverse_data)
> +			val |= QMC_SPE_CHAMR_TRANSP_RD;
> +		qmc_write16(chan->s_param + QMC_SPE_CHAMR, val);
> +		ret = qmc_setup_chan_trnsync(qmc, chan);
> +		if (ret)
> +			return ret;
> +	} else {
> +		qmc_write32(chan->s_param + QMC_SPE_ZDSTATE,
> 0x00000080);
> +		qmc_write16(chan->s_param + QMC_SPE_MFLR, 60);
> +		qmc_write16(chan->s_param + QMC_SPE_CHAMR,
> +			QMC_SPE_CHAMR_MODE_HDLC |
> QMC_SPE_CHAMR_HDLC_IDLM);
> +	}
> +
> +	/* Do not enable interrupts now. They will be enabled later */
> +	qmc_write16(chan->s_param + QMC_SPE_INTMSK, 0x0000);
> +
> +	/* Init Rx BDs and set Wrap bit on last descriptor */
> +	BUILD_BUG_ON(QMC_NB_RXBDS == 0);
> +	val = QMC_BD_RX_I;
> +	for (i = 0; i < QMC_NB_RXBDS; i++) {
> +		bd = chan->rxbds + i;
> +		qmc_write16(&bd->cbd_sc, val);
> +	}
> +	bd = chan->rxbds + QMC_NB_RXBDS - 1;
> +	qmc_write16(&bd->cbd_sc, val | QMC_BD_RX_W);
> +
> +	/* Init Tx BDs and set Wrap bit on last descriptor */
> +	BUILD_BUG_ON(QMC_NB_TXBDS == 0);
> +	val = QMC_BD_TX_I;
> +	if (chan->mode == QMC_HDLC)
> +		val |= QMC_BD_TX_L | QMC_BD_TX_TC;
> +	for (i = 0; i < QMC_NB_TXBDS; i++) {
> +		bd = chan->txbds + i;
> +		qmc_write16(&bd->cbd_sc, val);
> +	}
> +	bd = chan->txbds + QMC_NB_TXBDS - 1;
> +	qmc_write16(&bd->cbd_sc, val | QMC_BD_TX_W);
> +
> +	return 0;
> +}
> +
> +static int qmc_setup_chans(struct qmc *qmc)
> +{
> +	struct qmc_chan *chan;
> +	int ret;
> +
> +	list_for_each_entry(chan, &qmc->chan_head, list) {
> +		ret = qmc_setup_chan(qmc, chan);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static int qmc_finalize_chans(struct qmc *qmc)
> +{
> +	struct qmc_chan *chan;
> +	int ret;
> +
> +	list_for_each_entry(chan, &qmc->chan_head, list) {
> +		/* Unmask channel interrupts */
> +		if (chan->mode == QMC_HDLC) {
> +			qmc_write16(chan->s_param + QMC_SPE_INTMSK,
> +				    QMC_INT_NID | QMC_INT_IDL |
> QMC_INT_MRF |
> +				    QMC_INT_UN | QMC_INT_RXF |
> QMC_INT_BSY |
> +				    QMC_INT_TXB | QMC_INT_RXB);
> +		} else {
> +			qmc_write16(chan->s_param + QMC_SPE_INTMSK,
> +				    QMC_INT_UN | QMC_INT_BSY |
> +				    QMC_INT_TXB | QMC_INT_RXB);
> +		}
> +
> +		/* Forced stop the channel */
> +		ret = qmc_chan_stop(chan, QMC_CHAN_ALL);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static int qmc_setup_ints(struct qmc *qmc)
> +{
> +	unsigned int i;
> +	u16 __iomem *last;
> +
> +	/* Raz all entries */
> +	for (i = 0; i < (qmc->int_size / sizeof(u16)); i++)
> +		qmc_write16(qmc->int_table + i, 0x0000);
> +
> +	/* Set Wrap bit on last entry */
> +	if (qmc->int_size >= sizeof(u16)) {
> +		last = qmc->int_table + (qmc->int_size / sizeof(u16)) - 1;
> +		qmc_write16(last, QMC_INT_W);
> +	}
> +
> +	return 0;
> +}
> +
> +static void qmc_irq_gint(struct qmc *qmc)
> +{
> +	struct qmc_chan *chan;
> +	unsigned int chan_id;
> +	unsigned long flags;
> +	u16 int_entry;
> +
> +	int_entry = qmc_read16(qmc->int_curr);
> +	while (int_entry & QMC_INT_V) {
> +		/* Clear all but the Wrap bit */
> +		qmc_write16(qmc->int_curr, int_entry & QMC_INT_W);
> +
> +		chan_id = QMC_INT_GET_CHANNEL(int_entry);
> +		chan = qmc->chans[chan_id];
> +		if (!chan) {
> +			dev_err(qmc->dev, "interrupt on invalid chan %u\n",
> chan_id);
> +			goto int_next;
> +		}
> +
> +		if (int_entry & QMC_INT_TXB)
> +			qmc_chan_write_done(chan);
> +
> +		if (int_entry & QMC_INT_UN) {
> +			dev_info(qmc->dev, "intr chan %u, 0x%04x (UN)\n",
> chan_id,
> +				 int_entry);
> +			chan->nb_tx_underrun++;
> +		}
> +
> +		if (int_entry & QMC_INT_BSY) {
> +			dev_info(qmc->dev, "intr chan %u, 0x%04x (BSY)\n",
> chan_id,
> +				 int_entry);
> +			chan->nb_rx_busy++;
> +			/* Restart the receiver if needed */
> +			spin_lock_irqsave(&chan->rx_lock, flags);
> +			if (chan->rx_pending && !chan->is_rx_stopped) {
> +				if (chan->mode == QMC_TRANSPARENT)
> +					qmc_write32(chan->s_param +
> QMC_SPE_ZDSTATE, 0x18000080);
> +				else
> +					qmc_write32(chan->s_param +
> QMC_SPE_ZDSTATE, 0x00000080);
> +				qmc_write32(chan->s_param +
> QMC_SPE_RSTATE, 0x31000000);
> +				chan->is_rx_halted = false;
> +			} else {
> +				chan->is_rx_halted = true;
> +			}
> +			spin_unlock_irqrestore(&chan->rx_lock, flags);
> +		}
> +
> +		if (int_entry & QMC_INT_RXB)
> +			qmc_chan_read_done(chan);
> +
> +int_next:
> +		if (int_entry & QMC_INT_W)
> +			qmc->int_curr = qmc->int_table;
> +		else
> +			qmc->int_curr++;
> +		int_entry = qmc_read16(qmc->int_curr);
> +	}
> +}
> +
> +static irqreturn_t qmc_irq_handler(int irq, void *priv)
> +{
> +	struct qmc *qmc = (struct qmc *)priv;
> +	u16 scce;
> +
> +	scce = qmc_read16(qmc->scc_regs + SCC_SCCE);
> +	qmc_write16(qmc->scc_regs + SCC_SCCE, scce);
> +
> +	if (unlikely(scce & SCC_SCCE_IQOV))
> +		dev_info(qmc->dev, "IRQ queue overflow\n");
> +
> +	if (unlikely(scce & SCC_SCCE_GUN))
> +		dev_err(qmc->dev, "Global transmitter underrun\n");
> +
> +	if (unlikely(scce & SCC_SCCE_GOV))
> +		dev_err(qmc->dev, "Global receiver overrun\n");
> +
> +	/* normal interrupt */
> +	if (likely(scce & SCC_SCCE_GINT))
> +		qmc_irq_gint(qmc);
> +
> +	return IRQ_HANDLED;
> +}
> +
> +static int qmc_probe(struct platform_device *pdev)
> +{
> +	struct device_node *np = pdev->dev.of_node;
> +	unsigned int nb_chans;
> +	struct resource *res;
> +	struct qmc *qmc;
> +	int irq;
> +	int ret;
> +
> +	qmc = devm_kzalloc(&pdev->dev, sizeof(*qmc), GFP_KERNEL);
> +	if (!qmc)
> +		return -ENOMEM;
> +
> +	qmc->dev = &pdev->dev;
> +	INIT_LIST_HEAD(&qmc->chan_head);
> +
> +	qmc->scc_regs = devm_platform_ioremap_resource_byname(pdev,
> "scc_regs");
> +	if (IS_ERR(qmc->scc_regs))
> +		return PTR_ERR(qmc->scc_regs);
> +
> +
> +	res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
> "scc_pram");
> +	if (!res)
> +		return -EINVAL;
> +	qmc->scc_pram_offset = res->start - get_immrbase();
> +	qmc->scc_pram = devm_ioremap_resource(qmc->dev, res);
> +	if (IS_ERR(qmc->scc_pram))
> +		return PTR_ERR(qmc->scc_pram);
> +
> +	qmc->dpram  = devm_platform_ioremap_resource_byname(pdev,
> "dpram");
> +	if (IS_ERR(qmc->dpram))
> +		return PTR_ERR(qmc->dpram);
> +
> +	qmc->tsa_serial = devm_tsa_serial_get_byphandle(qmc->dev, np,
> "fsl,tsa-serial");
> +	if (IS_ERR(qmc->tsa_serial)) {
> +		return dev_err_probe(qmc->dev, PTR_ERR(qmc->tsa_serial),
> +				     "Failed to get TSA serial\n");
> +	}
> +
> +	/* Connect the serial (SCC) to TSA */
> +	ret = tsa_serial_connect(qmc->tsa_serial);
> +	if (ret) {
> +		dev_err(qmc->dev, "Failed to connect TSA serial\n");
> +		return ret;
> +	}
> +
> +	/* Parse channels informationss */
> +	ret = qmc_of_parse_chans(qmc, np);
> +	if (ret)
> +		goto err_tsa_serial_disconnect;
> +
> +	nb_chans = qmc_nb_chans(qmc);
> +
> +	/* Init GMSR_H and GMSR_L registers */
> +	qmc_write32(qmc->scc_regs + SCC_GSMRH,
> +		    SCC_GSMRH_CDS | SCC_GSMRH_CTSS | SCC_GSMRH_CDP
> | SCC_GSMRH_CTSP);
> +
> +	/* enable QMC mode */
> +	qmc_write32(qmc->scc_regs + SCC_GSMRL,
> SCC_GSMRL_MODE_QMC);
> +
> +	/*
> +	 * Allocate the buffer descriptor table
> +	 * 8 rx and 8 tx descriptors per channel
> +	 */
> +	qmc->bd_size = (nb_chans * (QMC_NB_TXBDS + QMC_NB_RXBDS))
> * sizeof(cbd_t);
> +	qmc->bd_table = dmam_alloc_coherent(qmc->dev, qmc->bd_size,
> +		&qmc->bd_dma_addr, GFP_KERNEL);
> +	if (!qmc->bd_table) {
> +		dev_err(qmc->dev, "Failed to allocate bd table\n");
> +		ret = -ENOMEM;
> +		goto err_tsa_serial_disconnect;
> +	}
> +	memset(qmc->bd_table, 0, qmc->bd_size);
> +
> +	qmc_write32(qmc->scc_pram + QMC_GBL_MCBASE, qmc-
> >bd_dma_addr);
> +
> +	/* Allocate the interrupt table */
> +	qmc->int_size = QMC_NB_INTS * sizeof(u16);
> +	qmc->int_table = dmam_alloc_coherent(qmc->dev, qmc->int_size,
> +		&qmc->int_dma_addr, GFP_KERNEL);
> +	if (!qmc->int_table) {
> +		dev_err(qmc->dev, "Failed to allocate interrupt table\n");
> +		ret = -ENOMEM;
> +		goto err_tsa_serial_disconnect;
> +	}
> +	memset(qmc->int_table, 0, qmc->int_size);
> +
> +	qmc->int_curr = qmc->int_table;
> +	qmc_write32(qmc->scc_pram + QMC_GBL_INTBASE, qmc-
> >int_dma_addr);
> +	qmc_write32(qmc->scc_pram + QMC_GBL_INTPTR, qmc-
> >int_dma_addr);
> +
> +	/* Set MRBLR (valid for HDLC only) max MRU + max CRC */
> +	qmc_write16(qmc->scc_pram + QMC_GBL_MRBLR,
> HDLC_MAX_MRU + 4);
> +
> +	qmc_write16(qmc->scc_pram + QMC_GBL_GRFTHR, 1);
> +	qmc_write16(qmc->scc_pram + QMC_GBL_GRFCNT, 1);
> +
> +	qmc_write32(qmc->scc_pram + QMC_GBL_C_MASK32, 0xDEBB20E3);
> +	qmc_write16(qmc->scc_pram + QMC_GBL_C_MASK16, 0xF0B8);
> +
> +	ret = qmc_setup_tsa(qmc);
> +	if (ret)
> +		goto err_tsa_serial_disconnect;
> +
> +	qmc_write16(qmc->scc_pram + QMC_GBL_QMCSTATE, 0x8000);
> +
> +	ret = qmc_setup_chans(qmc);
> +	if (ret)
> +		goto err_tsa_serial_disconnect;
> +
> +	/* Init interrupts table */
> +	ret = qmc_setup_ints(qmc);
> +	if (ret)
> +		goto err_tsa_serial_disconnect;
> +
> +	/* Disable and clear interrupts,  set the irq handler */
> +	qmc_write16(qmc->scc_regs + SCC_SCCM, 0x0000);
> +	qmc_write16(qmc->scc_regs + SCC_SCCE, 0x000F);
> +	irq = platform_get_irq(pdev, 0);
> +	if (irq < 0)
> +		goto err_tsa_serial_disconnect;
> +	ret = devm_request_irq(qmc->dev, irq, qmc_irq_handler, 0, "qmc",
> qmc);
> +	if (ret < 0)
> +		goto err_tsa_serial_disconnect;
> +
> +	/* Enable interrupts */
> +	qmc_write16(qmc->scc_regs + SCC_SCCM,
> +		SCC_SCCE_IQOV | SCC_SCCE_GINT | SCC_SCCE_GUN |
> SCC_SCCE_GOV);
> +
> +	ret = qmc_finalize_chans(qmc);
> +	if (ret < 0)
> +		goto err_disable_intr;
> +
> +	/* Enable transmiter and receiver */
> +	qmc_setbits32(qmc->scc_regs + SCC_GSMRL, SCC_GSMRL_ENR |
> SCC_GSMRL_ENT);
> +
> +	platform_set_drvdata(pdev, qmc);
> +
> +	return 0;
> +
> +err_disable_intr:
> +	qmc_write16(qmc->scc_regs + SCC_SCCM, 0);
> +
> +err_tsa_serial_disconnect:
> +	tsa_serial_disconnect(qmc->tsa_serial);
> +	return ret;
> +}
> +
> +static int qmc_remove(struct platform_device *pdev)
> +{
> +	struct qmc *qmc = platform_get_drvdata(pdev);
> +
> +	/* Disable transmiter and receiver */
> +	qmc_setbits32(qmc->scc_regs + SCC_GSMRL, 0);
> +
> +	/* Disable interrupts */
> +	qmc_write16(qmc->scc_regs + SCC_SCCM, 0);
> +
> +	/* Disconnect the serial from TSA */
> +	tsa_serial_disconnect(qmc->tsa_serial);
> +
> +	return 0;
> +}
> +
> +static const struct of_device_id qmc_id_table[] = {
> +	{ .compatible = "fsl,cpm1-scc-qmc" },
> +	{} /* sentinel */
> +};
> +MODULE_DEVICE_TABLE(of, qmc_id_table);
> +
> +static struct platform_driver qmc_driver = {
> +	.driver = {
> +		.name = "fsl-qmc",
> +		.of_match_table = of_match_ptr(qmc_id_table),
> +	},
> +	.probe = qmc_probe,
> +	.remove = qmc_remove,
> +};
> +module_platform_driver(qmc_driver);
> +
> +struct qmc_chan *qmc_chan_get_byphandle(struct device_node *np,
> const char *phandle_name)
> +{
> +	struct of_phandle_args out_args;
> +	struct platform_device *pdev;
> +	struct qmc_chan *qmc_chan;
> +	struct qmc *qmc;
> +	int ret;
> +
> +	ret = of_parse_phandle_with_fixed_args(np, phandle_name, 1, 0,
> +					       &out_args);
> +	if (ret < 0)
> +		return ERR_PTR(ret);
> +
> +	if (!of_match_node(qmc_driver.driver.of_match_table,
> out_args.np)) {
> +		of_node_put(out_args.np);
> +		return ERR_PTR(-EINVAL);
> +	}
> +
> +	pdev = of_find_device_by_node(out_args.np);
> +	of_node_put(out_args.np);
> +	if (!pdev)
> +		return ERR_PTR(-ENODEV);
> +
> +	qmc = platform_get_drvdata(pdev);
> +	if (!qmc) {
> +		platform_device_put(pdev);
> +		return ERR_PTR(-EPROBE_DEFER);
> +	}
> +
> +	if (out_args.args_count != 1) {
> +		platform_device_put(pdev);
> +		return ERR_PTR(-EINVAL);
> +	}
> +
> +	if (out_args.args[0] >= ARRAY_SIZE(qmc->chans)) {
> +		platform_device_put(pdev);
> +		return ERR_PTR(-EINVAL);
> +	}
> +
> +	qmc_chan = qmc->chans[out_args.args[0]];
> +	if (!qmc_chan) {
> +		platform_device_put(pdev);
> +		return ERR_PTR(-ENOENT);
> +	}
> +
> +	return qmc_chan;
> +}
> +EXPORT_SYMBOL(qmc_chan_get_byphandle);
> +
> +void qmc_chan_put(struct qmc_chan *chan)
> +{
> +	put_device(chan->qmc->dev);
> +}
> +EXPORT_SYMBOL(qmc_chan_put);
> +
> +static void devm_qmc_chan_release(struct device *dev, void *res)
> +{
> +	struct qmc_chan **qmc_chan = res;
> +
> +	qmc_chan_put(*qmc_chan);
> +}
> +
> +struct qmc_chan *devm_qmc_chan_get_byphandle(struct device *dev,
> +					     struct device_node *np,
> +					     const char *phandle_name)
> +{
> +	struct qmc_chan *qmc_chan;
> +	struct qmc_chan **dr;
> +
> +	dr = devres_alloc(devm_qmc_chan_release, sizeof(*dr),
> GFP_KERNEL);
> +	if (!dr)
> +		return ERR_PTR(-ENOMEM);
> +
> +	qmc_chan = qmc_chan_get_byphandle(np, phandle_name);
> +	if (!IS_ERR(qmc_chan)) {
> +		*dr = qmc_chan;
> +		devres_add(dev, dr);
> +	} else {
> +		devres_free(dr);
> +	}
> +
> +	return qmc_chan;
> +}
> +EXPORT_SYMBOL(devm_qmc_chan_get_byphandle);
> +
> +MODULE_AUTHOR("Herve Codina <herve.codina@bootlin.com>");
> +MODULE_DESCRIPTION("CPM QMC driver");
> +MODULE_LICENSE("GPL");
> diff --git a/include/soc/fsl/qe/qmc.h b/include/soc/fsl/qe/qmc.h
> new file mode 100644
> index 000000000000..3c61a50d2ae2
> --- /dev/null
> +++ b/include/soc/fsl/qe/qmc.h
> @@ -0,0 +1,71 @@
> +/* SPDX-License-Identifier: GPL-2.0-or-later */
> +/*
> + * QMC management
> + *
> + * Copyright 2022 CS GROUP France
> + *
> + * Author: Herve Codina <herve.codina@bootlin.com>
> + */
> +#ifndef __SOC_FSL_QMC_H__
> +#define __SOC_FSL_QMC_H__
> +
> +#include <linux/types.h>
> +
> +struct device_node;
> +struct device;
> +struct qmc_chan;
> +
> +struct qmc_chan *qmc_chan_get_byphandle(struct device_node *np,
> const char *phandle_name);
> +void qmc_chan_put(struct qmc_chan *chan);
> +struct qmc_chan *devm_qmc_chan_get_byphandle(struct device *dev,
> struct device_node *np,
> +					     const char *phandle_name);
> +
> +enum qmc_mode {
> +	QMC_TRANSPARENT,
> +	QMC_HDLC,
> +};
> +
> +struct qmc_chan_info {
> +	enum qmc_mode mode;
> +	unsigned long rx_fs_rate;
> +	unsigned long rx_bit_rate;
> +	u8 nb_rx_ts;
> +	unsigned long tx_fs_rate;
> +	unsigned long tx_bit_rate;
> +	u8 nb_tx_ts;
> +};
> +
> +int qmc_chan_get_info(struct qmc_chan *chan, struct qmc_chan_info
> *info);
> +
> +struct qmc_chan_param {
> +	enum qmc_mode mode;
> +	union {
> +		struct {
> +			u16 max_rx_buf_size;
> +			u16 max_rx_frame_size;
> +			bool is_crc32;
> +		} hdlc;
> +		struct {
> +			u16 max_rx_buf_size;
> +		} transp;
> +	};
> +};
> +
> +int qmc_chan_set_param(struct qmc_chan *chan, const struct
> qmc_chan_param *param);
> +
> +int qmc_chan_write_submit(struct qmc_chan *chan, dma_addr_t addr,
> size_t length,
> +			  void (*complete)(void *context), void *context);
> +
> +int qmc_chan_read_submit(struct qmc_chan *chan, dma_addr_t addr,
> size_t length,
> +			 void (*complete)(void *context, size_t length),
> +			 void *context);
> +
> +#define QMC_CHAN_READ  (1<<0)
> +#define QMC_CHAN_WRITE (1<<1)
> +#define QMC_CHAN_ALL   (QMC_CHAN_READ | QMC_CHAN_WRITE)
> +
> +int qmc_chan_start(struct qmc_chan *chan, int direction);
> +int qmc_chan_stop(struct qmc_chan *chan, int direction);
> +int qmc_chan_reset(struct qmc_chan *chan, int direction);
> +
> +#endif /* __SOC_FSL_QMC_H__ */
> --
> 2.39.0
Christophe Leroy Feb. 16, 2023, 6:43 a.m. UTC | #6
Le 15/02/2023 à 23:44, Leo Li a écrit :
> 
> 
>> -----Original Message-----
>> From: Herve Codina <herve.codina@bootlin.com>
>> Sent: Thursday, January 26, 2023 2:32 AM
>> To: Herve Codina <herve.codina@bootlin.com>; Leo Li
>> <leoyang.li@nxp.com>; Rob Herring <robh+dt@kernel.org>; Krzysztof
>> Kozlowski <krzysztof.kozlowski+dt@linaro.org>; Liam Girdwood
>> <lgirdwood@gmail.com>; Mark Brown <broonie@kernel.org>; Christophe
>> Leroy <christophe.leroy@csgroup.eu>; Michael Ellerman
>> <mpe@ellerman.id.au>; Nicholas Piggin <npiggin@gmail.com>; Qiang Zhao
>> <qiang.zhao@nxp.com>; Jaroslav Kysela <perex@perex.cz>; Takashi Iwai
>> <tiwai@suse.com>; Shengjiu Wang <shengjiu.wang@gmail.com>; Xiubo Li
>> <Xiubo.Lee@gmail.com>; Fabio Estevam <festevam@gmail.com>; Nicolin
>> Chen <nicoleotsuka@gmail.com>
>> Cc: linuxppc-dev@lists.ozlabs.org; linux-arm-kernel@lists.infradead.org;
>> devicetree@vger.kernel.org; linux-kernel@vger.kernel.org; alsa-devel@alsa-
>> project.org; Thomas Petazzoni <thomas.petazzoni@bootlin.com>
>> Subject: [PATCH v4 06/10] soc: fsl: cmp1: Add support for QMC
> 
> Typo: cpm1
> 
>>
>> The QMC (QUICC Multichannel Controller) emulates up to 64 channels within
>> one serial controller using the same TDM physical interface routed from the
>> TSA.
>>
>> It is available in some	PowerQUICC SoC such as the
>> MPC885 or MPC866.
>>
>> It is also available on some Quicc Engine SoCs.
>> This current version support CPM1 SoCs only and some enhancement are
>> needed to support Quicc Engine SoCs.
>>
>> Signed-off-by: Herve Codina <herve.codina@bootlin.com>
> 
> Otherwise looks good to me.
> 
> Acked-by: Li Yang <leoyang.li@nxp.com>

Thanks for the review and the ack.

Were you also able to have a look at patch 2 which implements support 
for the timeslot assigner (TSA) ?

Christophe