
powerpc/powernv: handle OPAL_SUCCESS return in opal_sensor_read

Message ID 1427305834-6737-1-git-send-email-clg@fr.ibm.com (mailing list archive)
State Superseded

Commit Message

Cédric Le Goater March 25, 2015, 5:50 p.m. UTC
Currently, when a sensor value is read, the kernel calls OPAL, which in
turn builds a message for the FSP, and waits for a message back. 

The new device tree for OPAL sensors [1] adds new sensors that can be 
read synchronously (core temperatures for instance) and that don't need 
to wait for a response.

This patch modifies the opal call to accept an OPAL_SUCCESS return value
and cover the case above.

[1] https://lists.ozlabs.org/pipermail/skiboot/2015-March/000639.html

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
---

 We still uselessly reserve a token (for the response) and take a
 lock, which might raise the need of a new 'opal_sensor_read_sync' 
 call.
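
 For illustration only, the 'opal_sensor_read_sync' idea above could skip the
 async token and the mutex entirely. This is a sketch under that assumption;
 opal_sensor_read_sync() does not exist at this point and the errno mapping
 below is arbitrary:

/*
 * Hypothetical sketch: a synchronous-only read path with no async token
 * and no opal_sensor_mutex.  opal_sensor_read_sync() is only the call
 * suggested in the note above, not a real OPAL wrapper.
 */
int opal_get_sensor_data_sync(u32 sensor_hndl, u32 *sensor_data)
{
	__be32 data;
	int ret;

	ret = opal_sensor_read_sync(sensor_hndl, &data);
	if (ret != OPAL_SUCCESS)
		return -EIO;	/* arbitrary mapping for the sketch */

	*sensor_data = be32_to_cpu(data);
	return 0;
}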

 arch/powerpc/platforms/powernv/opal-sensor.c |   29 +++++++++++++++++---------
 1 file changed, 19 insertions(+), 10 deletions(-)

Comments

Stewart Smith March 25, 2015, 11:07 p.m. UTC | #1
Cédric Le Goater <clg@fr.ibm.com> writes:
> Currently, when a sensor value is read, the kernel calls OPAL, which in
> turn builds a message for the FSP, and waits for a message back. 
>
> The new device tree for OPAL sensors [1] adds new sensors that can be 
> read synchronously (core temperatures for instance) and that don't need 
> to wait for a response.
>
> This patch modifies the opal call to accept an OPAL_SUCCESS return value
> and cover the case above.
>
> [1] https://lists.ozlabs.org/pipermail/skiboot/2015-March/000639.html
>
> Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
> ---
>
>  We still uselessly reserve a token (for the response) and take a
>  lock, which might raise the need of a new 'opal_sensor_read_sync' 
>  call.

Actually.... why do we take a lock around the OPAL calls at all?
Cédric Le Goater March 26, 2015, 9:44 a.m. UTC | #2
On 03/26/2015 12:07 AM, Stewart Smith wrote:
> Cédric Le Goater <clg@fr.ibm.com> writes:
>> Currently, when a sensor value is read, the kernel calls OPAL, which in
>> turn builds a message for the FSP, and waits for a message back. 
>>
>> The new device tree for OPAL sensors [1] adds new sensors that can be 
>> read synchronously (core temperatures for instance) and that don't need 
>> to wait for a response.
>>
>> This patch modifies the opal call to accept an OPAL_SUCCESS return value
>> and cover the case above.
>>
>> [1] https://lists.ozlabs.org/pipermail/skiboot/2015-March/000639.html
>>
>> Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
>> ---
>>
>>  We still uselessly reserve a token (for the response) and take a
>>  lock, which might raise the need of a new 'opal_sensor_read_sync' 
>>  call.
> 
> Actually.... why do we take a lock around the OPAL calls at all?

The sensor service in OPAL only handles one FSP request at a time and 
returns OPAL_BUSY if one is already in progress. The lock covers this case, 
but we could also remove it and return EBUSY to the driver, or even retry 
the call. That might be dangerous though. 

Changing OPAL to handle multiple requests simultaneously does not seem really 
necessary; it won't speed up the communication with the FSP, which is the 
main bottleneck.

C.
Cédric Le Goater March 26, 2015, 12:58 p.m. UTC | #3
On 03/26/2015 10:44 AM, Cedric Le Goater wrote:
> On 03/26/2015 12:07 AM, Stewart Smith wrote:
>> Cédric Le Goater <clg@fr.ibm.com> writes:
>>> Currently, when a sensor value is read, the kernel calls OPAL, which in
>>> turn builds a message for the FSP, and waits for a message back. 
>>>
>>> The new device tree for OPAL sensors [1] adds new sensors that can be 
>>> read synchronously (core temperatures for instance) and that don't need 
>>> to wait for a response.
>>>
>>> This patch modifies the opal call to accept an OPAL_SUCCESS return value
>>> and cover the case above.
>>>
>>> [1] https://lists.ozlabs.org/pipermail/skiboot/2015-March/000639.html
>>>
>>> Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
>>> ---
>>>
>>>  We still uselessly reserve a token (for the response) and take a
>>>  lock, which might raise the need of a new 'opal_sensor_read_sync' 
>>>  call.
>>
>> Actually.... why do we take a lock around the OPAL calls at all?
> 
> The sensor service in OPAL only handles one FSP request at a time and 
> returns OPAL_BUSY if one is already in progress. The lock covers this case, 
> but we could also remove it and return EBUSY to the driver, or even retry 
> the call. That might be dangerous though. 
> 
> Changing OPAL to handle multiple requests simultaneously does not seem really 
> necessary; it won't speed up the communication with the FSP, which is the 
> main bottleneck.

opal_get_sensor_data() is mixing OPAL return codes and errnos. I will send
a v2 addressing this problem first.

C.
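
For illustration, that mix could be avoided by translating the OPAL return
code before opal_get_sensor_data() returns. The mapping below is only a
sketch and is not the fix that went into v2:

/*
 * Sketch only: map an OPAL return code to a Linux errno.  The exact
 * mapping here is an assumption, not what v2 does.
 */
static int opal_sensor_errno(int rc)
{
	switch (rc) {
	case OPAL_SUCCESS:
		return 0;
	case OPAL_PARAMETER:
		return -EINVAL;
	case OPAL_BUSY:
	case OPAL_BUSY_EVENT:
		return -EBUSY;
	case OPAL_UNSUPPORTED:
		return -ENOSYS;
	default:
		return -EIO;
	}
}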
Stewart Smith March 27, 2015, 6:05 a.m. UTC | #4
Cedric Le Goater <clg@fr.ibm.com> writes:
> The sensor service in OPAL only handles one FSP request at a time and 
> returns OPAL_BUSY if one is already in progress. The lock covers this case, 
> but we could also remove it and return EBUSY to the driver, or even retry 
> the call. That might be dangerous though. 

Retrying the call should be okay.

Just because FSP wants to do things serially doesn't mean non-FSP does :)

> Changing OPAL to handle multiple requests simultaneously does not seem really 
> necessary; it won't speed up the communication with the FSP, which is the 
> main bottleneck.

Only on FSP systems though, and none of the OpenPower machines have FSPs :)
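
For illustration, a retry-based variant along those lines (dropping
opal_sensor_mutex) might look roughly like the sketch below; the retry limit
and delay are arbitrary choices, not part of this patch, and msleep() needs
<linux/delay.h>:

/*
 * Sketch only: retry on OPAL_BUSY instead of serialising callers with
 * opal_sensor_mutex.  Retry count and delay are arbitrary.
 */
static int opal_sensor_read_retry(u32 sensor_hndl, int token, __be32 *data)
{
	int ret, retries = 10;

	do {
		ret = opal_sensor_read(sensor_hndl, token, data);
		if (ret != OPAL_BUSY)
			return ret;
		msleep(10);	/* let the in-flight FSP request complete */
	} while (--retries);

	return OPAL_BUSY;
}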

Patch

diff --git a/arch/powerpc/platforms/powernv/opal-sensor.c b/arch/powerpc/platforms/powernv/opal-sensor.c
index 4ab67ef7abc9..99d6d9a371ab 100644
--- a/arch/powerpc/platforms/powernv/opal-sensor.c
+++ b/arch/powerpc/platforms/powernv/opal-sensor.c
@@ -46,18 +46,27 @@  int opal_get_sensor_data(u32 sensor_hndl, u32 *sensor_data)
 
 	mutex_lock(&opal_sensor_mutex);
 	ret = opal_sensor_read(sensor_hndl, token, &data);
-	if (ret != OPAL_ASYNC_COMPLETION)
-		goto out_token;
+	switch (ret) {
+	case OPAL_ASYNC_COMPLETION:
+		ret = opal_async_wait_response(token, &msg);
+		if (ret) {
+			pr_err("%s: Failed to wait for the async response, %d\n",
+			       __func__, ret);
+			goto out_token;
+		}
 
-	ret = opal_async_wait_response(token, &msg);
-	if (ret) {
-		pr_err("%s: Failed to wait for the async response, %d\n",
-				__func__, ret);
-		goto out_token;
-	}
+		ret = be64_to_cpu(msg.params[1]);
+
+		*sensor_data = be32_to_cpu(data);
+		break;
 
-	*sensor_data = be32_to_cpu(data);
-	ret = be64_to_cpu(msg.params[1]);
+	case OPAL_SUCCESS:
+		*sensor_data = be32_to_cpu(data);
+		break;
+
+	default:
+		break;
+	}
 
 out_token:
 	mutex_unlock(&opal_sensor_mutex);
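
For readability, here is the post-patch body of the hunk above written out as
plain code, with comments added; the token setup and the code after
mutex_unlock() are not shown in the diff and are left out here as well:

	mutex_lock(&opal_sensor_mutex);
	ret = opal_sensor_read(sensor_hndl, token, &data);
	switch (ret) {
	case OPAL_ASYNC_COMPLETION:
		/* FSP-backed sensor: wait for the asynchronous OPAL response */
		ret = opal_async_wait_response(token, &msg);
		if (ret) {
			pr_err("%s: Failed to wait for the async response, %d\n",
			       __func__, ret);
			goto out_token;
		}

		ret = be64_to_cpu(msg.params[1]);

		*sensor_data = be32_to_cpu(data);
		break;

	case OPAL_SUCCESS:
		/* Sensor was read synchronously; data is already valid */
		*sensor_data = be32_to_cpu(data);
		break;

	default:
		/*
		 * ret is still a raw OPAL return code here, not an errno;
		 * this is the mixing issue noted in comment #3, to be
		 * addressed in v2.
		 */
		break;
	}

out_token:
	mutex_unlock(&opal_sensor_mutex);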