[v3,0/7] sunxi: Add DT representation for the MBUS controller

Message ID cover.f2e5fbb4d21384168973fa32377a33d627768146.1549897336.git-series.maxime.ripard@bootlin.com
Series sunxi: Add DT representation for the MBUS controller

Message

Maxime Ripard Feb. 11, 2019, 3:02 p.m. UTC
Hi,

We've had to hack around in our drivers for quite some time to account
for the fact that our DMA accesses are not done through the parent
node, but through another bus whose RAM mapping differs from the CPU's
(0 instead of 0x40000000 on most SoCs).

Following the discussion around the submission of a camera device
suffering from the same hacks, I've decided to put together a series
that introduces a special interconnect name, "dma", which allows
expressing the DMA relationship between a master and its bus even when
they are not direct parents in the DT.
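
To make this a bit more concrete, a DMA master would then be described
along these lines. This is only a sketch: the node names, unit
addresses and port number are illustrative rather than taken verbatim
from the binding patches in this series.

    mbus: dram-controller@1c01000 {
            compatible = "allwinner,sun5i-a13-mbus";
            reg = <0x01c01000 0x1000>;
            clocks = <&ccu CLK_MBUS>;
            dma-ranges = <0x00000000 0x40000000 0x20000000>;
            #interconnect-cells = <1>;
    };

    fe0: display-frontend@1e00000 {
            /* ... */
            interconnects = <&mbus 19>;
            interconnect-names = "dma";
    };

The "dma" entry is what the OF address translation code then looks up
to find the bus (and its dma-ranges) to translate through, instead of
the device's parent.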

Let me know what you think,
Maxime

Changes from v2:
  - Rewrote the commit logs still mentioning dma-parent
  - Removed a dma-parent-cells leftover in the binding example
  - Removed dma-parent mentions remaining in comments

Changes from v1:
  - Change to use the now merged interconnect bindings
  - Move the DMA parent retrieval logic to its own function
  - Rebase on top of 5.0

Maxime Ripard (7):
  dt-bindings: interconnect: Add a dma interconnect name
  dt-bindings: bus: Add binding for the Allwinner MBUS controller
  of: address: Add parent pointer to the __of_translate_address args
  of: address: Add support for the parent DMA bus
  drm/sun4i: Rely on dma interconnect for our RAM offset
  clk: sunxi-ng: sun5i: Export the MBUS clock
  ARM: dts: sun5i: Add the MBUS controller

 Documentation/devicetree/bindings/arm/sunxi/sunxi-mbus.txt      | 36 +++++-
 Documentation/devicetree/bindings/interconnect/interconnect.txt |  3 +-
 arch/arm/boot/dts/sun5i.dtsi                                    | 13 ++-
 drivers/clk/sunxi-ng/ccu-sun5i.h                                |  4 +-
 drivers/gpu/drm/sun4i/sun4i_backend.c                           | 28 +++-
 drivers/of/address.c                                            | 49 +++++--
 include/dt-bindings/clock/sun5i-ccu.h                           |  2 +-
 7 files changed, 114 insertions(+), 21 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/arm/sunxi/sunxi-mbus.txt

base-commit: 87e87c7b0eeb3c9e08cdfe28fd540247bdf31ef5

Comments

Robin Murphy Feb. 12, 2019, 6:46 p.m. UTC | #1
On 11/02/2019 15:02, Maxime Ripard wrote:
> Now that we can express our DMA topology, rely on that property instead of
> hardcoding an offset from the dma_addr_t, which wasn't really great.
> 
> We still need to add some code to deal with old DTs that lack that
> property, but we move the offset to the DRM device's dma_pfn_offset to be
> able to rely on just the dma_addr_t associated with the GEM object.
> 
> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> Signed-off-by: Maxime Ripard <maxime.ripard@bootlin.com>
> ---
>   drivers/gpu/drm/sun4i/sun4i_backend.c | 28 +++++++++++++++++++++-------
>   1 file changed, 21 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/gpu/drm/sun4i/sun4i_backend.c b/drivers/gpu/drm/sun4i/sun4i_backend.c
> index 9e9255ee59cd..1846a1b30fea 100644
> --- a/drivers/gpu/drm/sun4i/sun4i_backend.c
> +++ b/drivers/gpu/drm/sun4i/sun4i_backend.c
> @@ -383,13 +383,6 @@ int sun4i_backend_update_layer_buffer(struct sun4i_backend *backend,
>   	paddr = drm_fb_cma_get_gem_addr(fb, state, 0);
>   	DRM_DEBUG_DRIVER("Setting buffer address to %pad\n", &paddr);
>   
> -	/*
> -	 * backend DMA accesses DRAM directly, bypassing the system
> -	 * bus. As such, the address range is different and the buffer
> -	 * address needs to be corrected.
> -	 */
> -	paddr -= PHYS_OFFSET;
> -
>   	if (fb->format->is_yuv)
>   		return sun4i_backend_update_yuv_buffer(backend, fb, paddr);
>   
> @@ -835,6 +828,27 @@ static int sun4i_backend_bind(struct device *dev, struct device *master,
>   	dev_set_drvdata(dev, backend);
>   	spin_lock_init(&backend->frontend_lock);
>   
> +	if (of_find_property(dev->of_node, "interconnects", NULL)) {
> +		/*
> +		 * This assumes we have the same DMA constraints for all the
> +		 * devices in our pipeline (all the backends, but also the
> +		 * frontends). This sounds bad, but it has always been the case
> +		 * for us, and DRM doesn't do per-device allocation either, so
> +		 * we would need to fix DRM first...
> +		 */
> +		ret = of_dma_configure(drm->dev, dev->of_node, true);

It would be even nicer if we could ensure that drm->dev originates from 
a DT node which has the appropriate interconnects property itself, such 
that we can assume it's already configured correctly.

Robin.

> +		if (ret)
> +			return ret;
> +	} else {
> +		/*
> +		 * If we don't have the interconnect property, most likely
> +		 * because of an old DT, we need to set the DMA offset by hand
> +		 * on our device since the RAM mapping is at 0 for the DMA bus,
> +		 * unlike the CPU.
> +		 */
> +		drm->dev->dma_pfn_offset = PHYS_PFN_OFFSET;
> +	}
> +
>   	backend->engine.node = dev->of_node;
>   	backend->engine.ops = &sun4i_backend_engine_ops;
>   	backend->engine.id = sun4i_backend_of_get_id(dev->of_node);
>
Maxime Ripard Feb. 13, 2019, 3:41 p.m. UTC | #2
Hi Robin,

Thanks for your feedback!

On Tue, Feb 12, 2019 at 06:46:40PM +0000, Robin Murphy wrote:
> On 11/02/2019 15:02, Maxime Ripard wrote:
> > Now that we can express our DMA topology, rely on that property instead of
> > hardcoding an offset from the dma_addr_t, which wasn't really great.
> > 
> > We still need to add some code to deal with old DTs that lack that
> > property, but we move the offset to the DRM device's dma_pfn_offset to be
> > able to rely on just the dma_addr_t associated with the GEM object.
> > 
> > Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> > Signed-off-by: Maxime Ripard <maxime.ripard@bootlin.com>
> > ---
> >   drivers/gpu/drm/sun4i/sun4i_backend.c | 28 +++++++++++++++++++++-------
> >   1 file changed, 21 insertions(+), 7 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/sun4i/sun4i_backend.c b/drivers/gpu/drm/sun4i/sun4i_backend.c
> > index 9e9255ee59cd..1846a1b30fea 100644
> > --- a/drivers/gpu/drm/sun4i/sun4i_backend.c
> > +++ b/drivers/gpu/drm/sun4i/sun4i_backend.c
> > @@ -383,13 +383,6 @@ int sun4i_backend_update_layer_buffer(struct sun4i_backend *backend,
> >   	paddr = drm_fb_cma_get_gem_addr(fb, state, 0);
> >   	DRM_DEBUG_DRIVER("Setting buffer address to %pad\n", &paddr);
> > -	/*
> > -	 * backend DMA accesses DRAM directly, bypassing the system
> > -	 * bus. As such, the address range is different and the buffer
> > -	 * address needs to be corrected.
> > -	 */
> > -	paddr -= PHYS_OFFSET;
> > -
> >   	if (fb->format->is_yuv)
> >   		return sun4i_backend_update_yuv_buffer(backend, fb, paddr);
> > @@ -835,6 +828,27 @@ static int sun4i_backend_bind(struct device *dev, struct device *master,
> >   	dev_set_drvdata(dev, backend);
> >   	spin_lock_init(&backend->frontend_lock);
> > +	if (of_find_property(dev->of_node, "interconnects", NULL)) {
> > +		/*
> > +		 * This assumes we have the same DMA constraints for all the
> > +		 * devices in our pipeline (all the backends, but also the
> > +		 * frontends). This sounds bad, but it has always been the case
> > +		 * for us, and DRM doesn't do per-device allocation either, so
> > +		 * we would need to fix DRM first...
> > +		 */
> > +		ret = of_dma_configure(drm->dev, dev->of_node, true);
> 
> It would be even nicer if we could ensure that drm->dev originates from a DT
> node which has the appropriate interconnects property itself, such that we
> can assume it's already configured correctly.

The thing is drm->dev comes from a node in the DT that is a virtual
node, and therefore doesn't have any resources attached, so I'm not
sure we have any other way, unfortunately.
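
For reference, that virtual node currently looks more or less like
this in our DTs (simplified; allwinner,pipelines points at the entry
points of the display pipeline):

    display-engine {
            compatible = "allwinner,sun5i-a13-display-engine";
            allwinner,pipelines = <&fe0>;
    };

There's no reg, no clocks, nothing we could sensibly hang a DMA
translation off of.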

Maxime
Robin Murphy Feb. 13, 2019, 4:40 p.m. UTC | #3
On 13/02/2019 15:41, Maxime Ripard wrote:
> Hi Robin,
> 
> Thanks for your feedback!
> 
> On Tue, Feb 12, 2019 at 06:46:40PM +0000, Robin Murphy wrote:
>> On 11/02/2019 15:02, Maxime Ripard wrote:
>>> Now that we can express our DMA topology, rely on that property instead of
>>> hardcoding an offset from the dma_addr_t, which wasn't really great.
>>>
>>> We still need to add some code to deal with old DTs that lack that
>>> property, but we move the offset to the DRM device's dma_pfn_offset to be
>>> able to rely on just the dma_addr_t associated with the GEM object.
>>>
>>> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>>> Signed-off-by: Maxime Ripard <maxime.ripard@bootlin.com>
>>> ---
>>>    drivers/gpu/drm/sun4i/sun4i_backend.c | 28 +++++++++++++++++++++-------
>>>    1 file changed, 21 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/sun4i/sun4i_backend.c b/drivers/gpu/drm/sun4i/sun4i_backend.c
>>> index 9e9255ee59cd..1846a1b30fea 100644
>>> --- a/drivers/gpu/drm/sun4i/sun4i_backend.c
>>> +++ b/drivers/gpu/drm/sun4i/sun4i_backend.c
>>> @@ -383,13 +383,6 @@ int sun4i_backend_update_layer_buffer(struct sun4i_backend *backend,
>>>    	paddr = drm_fb_cma_get_gem_addr(fb, state, 0);
>>>    	DRM_DEBUG_DRIVER("Setting buffer address to %pad\n", &paddr);
>>> -	/*
>>> -	 * backend DMA accesses DRAM directly, bypassing the system
>>> -	 * bus. As such, the address range is different and the buffer
>>> -	 * address needs to be corrected.
>>> -	 */
>>> -	paddr -= PHYS_OFFSET;
>>> -
>>>    	if (fb->format->is_yuv)
>>>    		return sun4i_backend_update_yuv_buffer(backend, fb, paddr);
>>> @@ -835,6 +828,27 @@ static int sun4i_backend_bind(struct device *dev, struct device *master,
>>>    	dev_set_drvdata(dev, backend);
>>>    	spin_lock_init(&backend->frontend_lock);
>>> +	if (of_find_property(dev->of_node, "interconnects", NULL)) {
>>> +		/*
>>> +		 * This assume we have the same DMA constraints for all our the
>>> +		 * devices in our pipeline (all the backends, but also the
>>> +		 * frontends). This sounds bad, but it has always been the case
>>> +		 * for us, and DRM doesn't do per-device allocation either, so
>>> +		 * we would need to fix DRM first...
>>> +		 */
>>> +		ret = of_dma_configure(drm->dev, dev->of_node, true);
>>
>> It would be even nicer if we could ensure that drm->dev originates from a DT
>> node which has the appropriate interconnects property itself, such that we
>> can assume it's already configured correctly.
> 
> The thing is drm->dev comes from a node in the DT that is a virtual
> node, and therefore doesn't have any resources attached, so I'm not
> sure we have any other way, unfortunately.

Right, I appreciate that it may not be feasible to swizzle drm->dev for 
one of the 'real' component devices, but what I was also thinking was 
that since the virtual device node effectively represents the 
aggregation of the other component devices, we could just say that it 
also has to have its own link to the MBUS interconnect (with the ID of 
the pipeline entrypoint it's associated with, I guess). That ought to be 
enough to get dma_configure() to do the job, and in fairness is no 
*less* accurate a description of the hardware, even if it might look a 
little funky to some.
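
Something along these lines is what I have in mind (the port number is
made up for the sake of the example):

    display-engine {
            compatible = "allwinner,sun5i-a13-display-engine";
            allwinner,pipelines = <&fe0>;
            interconnects = <&mbus 19>;
            interconnect-names = "dma";
    };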

Robin.
Maxime Ripard Feb. 19, 2019, 10:55 a.m. UTC | #4
Hi Robin,

On Wed, Feb 13, 2019 at 04:40:11PM +0000, Robin Murphy wrote:
> On 13/02/2019 15:41, Maxime Ripard wrote:
> > Hi Robin,
> > 
> > Thanks for your feedback!
> > 
> > On Tue, Feb 12, 2019 at 06:46:40PM +0000, Robin Murphy wrote:
> > > On 11/02/2019 15:02, Maxime Ripard wrote:
> > > > Now that we can express our DMA topology, rely on that property instead of
> > > > hardcoding an offset from the dma_addr_t, which wasn't really great.
> > > > 
> > > > We still need to add some code to deal with old DTs that lack that
> > > > property, but we move the offset to the DRM device's dma_pfn_offset to be
> > > > able to rely on just the dma_addr_t associated with the GEM object.
> > > > 
> > > > Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> > > > Signed-off-by: Maxime Ripard <maxime.ripard@bootlin.com>
> > > > ---
> > > >    drivers/gpu/drm/sun4i/sun4i_backend.c | 28 +++++++++++++++++++++-------
> > > >    1 file changed, 21 insertions(+), 7 deletions(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/sun4i/sun4i_backend.c b/drivers/gpu/drm/sun4i/sun4i_backend.c
> > > > index 9e9255ee59cd..1846a1b30fea 100644
> > > > --- a/drivers/gpu/drm/sun4i/sun4i_backend.c
> > > > +++ b/drivers/gpu/drm/sun4i/sun4i_backend.c
> > > > @@ -383,13 +383,6 @@ int sun4i_backend_update_layer_buffer(struct sun4i_backend *backend,
> > > >    	paddr = drm_fb_cma_get_gem_addr(fb, state, 0);
> > > >    	DRM_DEBUG_DRIVER("Setting buffer address to %pad\n", &paddr);
> > > > -	/*
> > > > -	 * backend DMA accesses DRAM directly, bypassing the system
> > > > -	 * bus. As such, the address range is different and the buffer
> > > > -	 * address needs to be corrected.
> > > > -	 */
> > > > -	paddr -= PHYS_OFFSET;
> > > > -
> > > >    	if (fb->format->is_yuv)
> > > >    		return sun4i_backend_update_yuv_buffer(backend, fb, paddr);
> > > > @@ -835,6 +828,27 @@ static int sun4i_backend_bind(struct device *dev, struct device *master,
> > > >    	dev_set_drvdata(dev, backend);
> > > >    	spin_lock_init(&backend->frontend_lock);
> > > > +	if (of_find_property(dev->of_node, "interconnects", NULL)) {
> > > > +		/*
> > > > +		 * This assumes we have the same DMA constraints for all the
> > > > +		 * devices in our pipeline (all the backends, but also the
> > > > +		 * frontends). This sounds bad, but it has always been the case
> > > > +		 * for us, and DRM doesn't do per-device allocation either, so
> > > > +		 * we would need to fix DRM first...
> > > > +		 */
> > > > +		ret = of_dma_configure(drm->dev, dev->of_node, true);
> > > 
> > > It would be even nicer if we could ensure that drm->dev originates from a DT
> > > node which has the appropriate interconnects property itself, such that we
> > > can assume it's already configured correctly.
> > 
> > The thing is drm->dev comes from a node in the DT that is a virtual
> > node, and therefore doesn't have any resources attached, so I'm not
> > sure we have any other way, unfortunately.
> 
> Right, I appreciate that it may not be feasible to swizzle drm->dev for one
> of the 'real' component devices, but what I was also thinking was that since
> the virtual device node effectively represents the aggregation of the other
> component devices, we could just say that it also has to have its own link
> to the MBUS interconnect (with the ID of the pipeline entrypoint it's
> associated with, I guess). That ought to be enough to get dma_configure() to
> do the job, and in fairness is no *less* accurate a description of the
> hardware, even if it might look a little funky to some.

That virtual device, however, can work with up to 4 devices that can
perform DMA operations, each of them going through a different port of
the MBUS controller, which can account for DMA usage on a per-master
basis.

Eventually, we should support that feature as well, but the point is
that from a DT point of view, there's not a single point of DMA access
for that particular device but more likely 2-4 points, each with a
different argument to the phandle.
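
In other words, an accurate description would look more like the
sketch below (the port numbers and names are made up for the example),
and a single "dma" lookup wouldn't know which entry to pick:

    display-engine {
            compatible = "allwinner,sun4i-a10-display-engine";
            allwinner,pipelines = <&fe0>, <&fe1>;
            interconnects = <&mbus 19>, <&mbus 20>,
                            <&mbus 21>, <&mbus 22>;
            interconnect-names = "dma-fe0", "dma-fe1",
                                 "dma-be0", "dma-be1";
    };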

We could of course have a hack and use only one of them, but that
would be less accurate than what we currently have. Plus, this is
really due to a restriction on the DRM side that could change in the
future (even though it's unlikely). Fixing that restriction would make
that property completely useless, confusing, and wrong from a hardware
PoV.

Maxime
Robin Murphy March 5, 2019, 4:11 p.m. UTC | #5
On 19/02/2019 10:55, Maxime Ripard wrote:
> Hi Robin,
> 
> On Wed, Feb 13, 2019 at 04:40:11PM +0000, Robin Murphy wrote:
>> On 13/02/2019 15:41, Maxime Ripard wrote:
>>> Hi Robin,
>>>
>>> Thanks for your feedback!
>>>
>>> On Tue, Feb 12, 2019 at 06:46:40PM +0000, Robin Murphy wrote:
>>>> On 11/02/2019 15:02, Maxime Ripard wrote:
>>>>> Now that we can express our DMA topology, rely on that property instead of
>>>>> hardcoding an offset from the dma_addr_t, which wasn't really great.
>>>>>
>>>>> We still need to add some code to deal with old DTs that lack that
>>>>> property, but we move the offset to the DRM device's dma_pfn_offset to be
>>>>> able to rely on just the dma_addr_t associated with the GEM object.
>>>>>
>>>>> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>>>>> Signed-off-by: Maxime Ripard <maxime.ripard@bootlin.com>
>>>>> ---
>>>>>     drivers/gpu/drm/sun4i/sun4i_backend.c | 28 +++++++++++++++++++++-------
>>>>>     1 file changed, 21 insertions(+), 7 deletions(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/sun4i/sun4i_backend.c b/drivers/gpu/drm/sun4i/sun4i_backend.c
>>>>> index 9e9255ee59cd..1846a1b30fea 100644
>>>>> --- a/drivers/gpu/drm/sun4i/sun4i_backend.c
>>>>> +++ b/drivers/gpu/drm/sun4i/sun4i_backend.c
>>>>> @@ -383,13 +383,6 @@ int sun4i_backend_update_layer_buffer(struct sun4i_backend *backend,
>>>>>     	paddr = drm_fb_cma_get_gem_addr(fb, state, 0);
>>>>>     	DRM_DEBUG_DRIVER("Setting buffer address to %pad\n", &paddr);
>>>>> -	/*
>>>>> -	 * backend DMA accesses DRAM directly, bypassing the system
>>>>> -	 * bus. As such, the address range is different and the buffer
>>>>> -	 * address needs to be corrected.
>>>>> -	 */
>>>>> -	paddr -= PHYS_OFFSET;
>>>>> -
>>>>>     	if (fb->format->is_yuv)
>>>>>     		return sun4i_backend_update_yuv_buffer(backend, fb, paddr);
>>>>> @@ -835,6 +828,27 @@ static int sun4i_backend_bind(struct device *dev, struct device *master,
>>>>>     	dev_set_drvdata(dev, backend);
>>>>>     	spin_lock_init(&backend->frontend_lock);
>>>>> +	if (of_find_property(dev->of_node, "interconnects", NULL)) {
>>>>> +		/*
>>>>> +		 * This assumes we have the same DMA constraints for all the
>>>>> +		 * devices in our pipeline (all the backends, but also the
>>>>> +		 * frontends). This sounds bad, but it has always been the case
>>>>> +		 * for us, and DRM doesn't do per-device allocation either, so
>>>>> +		 * we would need to fix DRM first...
>>>>> +		 */
>>>>> +		ret = of_dma_configure(drm->dev, dev->of_node, true);
>>>>
>>>> It would be even nicer if we could ensure that drm->dev originates from a DT
>>>> node which has the appropriate interconnects property itself, such that we
>>>> can assume it's already configured correctly.
>>>
>>> The thing is drm->dev comes from a node in the DT that is a virtual
>>> node, and therefore doesn't have any resources attached, so I'm not
>>> sure we have any other way, unfortunately.
>>
>> Right, I appreciate that it may not be feasible to swizzle drm->dev for one
>> of the 'real' component devices, but what I was also thinking was that since
>> the virtual device node effectively represents the aggregation of the other
>> component devices, we could just say that it also has to have its own link
>> to the MBUS interconnect (with the ID of the pipeline entrypoint it's
>> associated with, I guess). That ought to be enough to get dma_configure() to
>> do the job, and in fairness is no *less* accurate a description of the
>> hardware, even if it might look a little funky to some.
> 
> That virtual device, however, can work with up to 4 devices that can
> perform DMA operations, each of them going through a different port of
> the MBUS controller, which can account for DMA usage on a per-master
> basis.
> 
> Eventually, we should support that feature as well, but the point is
> that from a DT point of view, there's not a single point of DMA access
> for that particular device but more likely 2-4 points, each with a
> different argument to the phandle.
> 
> We could of course have a hack and use only one of them, but that
> would be less accurate than what we currently have. Plus, this is
> really due to a restriction on the DRM side, that could change in the
> future (even though it's unlikely). Fixing that restriction would make
> that property completely useless, confusing and wrong from an hardware
> PoV.

Thanks for the clarification. Much as I'd like to keep 
of_dma_configure() out of drivers as much as possible, I agree that it's 
not sufficient justification for (ab)using DT to sidestep a 
Linux-specific issue which might eventually get properly fixed anyway. 
I'm sure the last thing sunxi needs is any more awkward DT ABI concerns :)

Robin.