From patchwork Tue Oct 22 11:28:30 2019
X-Patchwork-Submitter: Vasant Hegde
X-Patchwork-Id: 1181255
From: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
To: skiboot@lists.ozlabs.org
Cc: Vaidyanathan Srinivasan
Date: Tue, 22 Oct 2019 16:58:30 +0530
Message-Id: <20191022112830.17065-1-hegdevasant@linux.vnet.ibm.com>
Subject: [Skiboot] [PATCH] prd: Fix prd message queuing interface

The OPAL_MSG_PRD interface can handle messages of size <=
OPAL_MSG_FIXED_PARAMS_SIZE, but the kernel prd driver had a bug where it
would not copy partial data to user space. This creates a problem, since
the opal-prd daemon reads messages continuously. Commit 9cae036fa fixed
the issue by enhancing opal-prd to allocate a bigger message buffer based
on the device tree.

For backward compatibility (new OPAL with an old kernel/userspace), let's
restrict the OPAL_MSG_PRD messaging interface to sending up to 32 bytes
of data. This is fine, since most messages are smaller than 32 bytes; the
exception is the FSP - HBRT messages, which are a new feature.

Cc: Jeremy Kerr
Cc: Vaidyanathan Srinivasan
Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
---
 hw/prd.c | 46 ++++++++++++++++++++--------------------------
 1 file changed, 20 insertions(+), 26 deletions(-)
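For reviewers, a quick standalone sketch of the dispatch rule the new
opal_queue_prd_msg() helper applies: messages of 0x20 (32) bytes or less
are queued as OPAL_MSG_PRD so older kernels keep working, and anything
larger goes out as OPAL_MSG_PRD2. The enum values and header layout below
are simplified stand-ins rather than the opal-api.h definitions, and
ntohs()/htons() stand in for skiboot's be16_to_cpu()/cpu_to_be16():

/*
 * Standalone illustration of the size-based dispatch in this patch.
 * Types and enum values are simplified stand-ins, not firmware code.
 */
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

enum opal_msg_type { OPAL_MSG_PRD, OPAL_MSG_PRD2 };

struct opal_prd_msg_hdr {
	uint8_t  type;
	uint8_t  pad;
	uint16_t size;		/* total message size, big-endian on the wire */
};

/*
 * <= 32 bytes fits the legacy OPAL_MSG_PRD fixed parameter area, so old
 * kernels (which do not support OPAL_MSG_PRD2) still receive it intact.
 */
static enum opal_msg_type pick_msg_type(const struct opal_prd_msg_hdr *hdr)
{
	return (ntohs(hdr->size) <= 0x20) ? OPAL_MSG_PRD : OPAL_MSG_PRD2;
}

int main(void)
{
	struct opal_prd_msg_hdr small = { .size = htons(0x20) };
	struct opal_prd_msg_hdr big   = { .size = htons(0x40) };

	printf("0x20-byte message -> %s\n",
	       pick_msg_type(&small) == OPAL_MSG_PRD ? "OPAL_MSG_PRD"
						     : "OPAL_MSG_PRD2");
	printf("0x40-byte message -> %s\n",
	       pick_msg_type(&big) == OPAL_MSG_PRD ? "OPAL_MSG_PRD"
						   : "OPAL_MSG_PRD2");
	return 0;
}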
diff --git a/hw/prd.c b/hw/prd.c
index d9198f0eb..9c4cb1f45 100644
--- a/hw/prd.c
+++ b/hw/prd.c
@@ -149,6 +149,22 @@ static void prd_msg_consumed(void *data, int status)
 	unlock(&events_lock);
 }
 
+/*
+ * The OPAL_MSG_PRD interface can handle message sizes <= OPAL_MSG_FIXED_PARAMS_SIZE,
+ * but the kernel prd driver had a bug where it would not copy partial data to
+ * user space. Use OPAL_MSG_PRD only if the size is <= sizeof(opal_prd_msg).
+ */
+static inline int opal_queue_prd_msg(struct opal_prd_msg *msg)
+{
+	enum opal_msg_type msg_type = OPAL_MSG_PRD2;
+
+	if (be16_to_cpu(msg->hdr.size) <= 0x20)
+		msg_type = OPAL_MSG_PRD;
+
+	return _opal_queue_msg(msg_type, msg, prd_msg_consumed,
+			       be16_to_cpu(msg->hdr.size), msg);
+}
+
 static int populate_ipoll_msg(struct opal_prd_msg *msg, uint32_t proc)
 {
 	uint64_t ipoll_mask;
@@ -224,8 +240,7 @@ static void send_next_pending_event(void)
 	 * disabled then we shouldn't propagate PRD events to the host.
 	 */
 	if (prd_enabled) {
-		rc = _opal_queue_msg(OPAL_MSG_PRD, prd_msg, prd_msg_consumed,
-				     prd_msg->hdr.size, prd_msg);
+		rc = opal_queue_prd_msg(prd_msg);
 		if (!rc)
 			prd_msg_inuse = true;
 	}
@@ -327,7 +342,6 @@ void prd_fw_resp_fsp_response(int status)
 	uint64_t fw_resp_len_old;
 	int rc;
 	uint16_t hdr_size;
-	enum opal_msg_type msg_type = OPAL_MSG_PRD2;
 
 	lock(&events_lock);
 
@@ -348,16 +362,7 @@ void prd_fw_resp_fsp_response(int status)
 		prd_msg_fsp_req->hdr.size = cpu_to_be16(hdr_size);
 	}
 
-	/*
-	 * If prd message size is <= OPAL_MSG_FIXED_PARAMS_SIZE then use
-	 * OPAL_MSG_PRD to pass data to kernel. So that it works fine on
-	 * older kernel (which does not support OPAL_MSG_PRD2).
-	 */
-	if (prd_msg_fsp_req->hdr.size < OPAL_MSG_FIXED_PARAMS_SIZE)
-		msg_type = OPAL_MSG_PRD;
-
-	rc = _opal_queue_msg(msg_type, prd_msg_fsp_req, prd_msg_consumed,
-			     prd_msg_fsp_req->hdr.size, prd_msg_fsp_req);
+	rc = opal_queue_prd_msg(prd_msg_fsp_req);
 	if (!rc)
 		prd_msg_inuse = true;
 	unlock(&events_lock);
@@ -367,7 +372,6 @@ int prd_hbrt_fsp_msg_notify(void *data, u32 dsize)
 {
 	int size;
 	int rc = FSP_STATUS_GENERIC_FAILURE;
-	enum opal_msg_type msg_type = OPAL_MSG_PRD2;
 
 	if (!prd_enabled || !prd_active) {
 		prlog(PR_NOTICE, "PRD: %s: PRD daemon is not ready\n",
@@ -407,16 +411,7 @@ int prd_hbrt_fsp_msg_notify(void *data, u32 dsize)
 	prd_msg_fsp_notify->fw_notify.len = cpu_to_be64(dsize);
 	memcpy(&(prd_msg_fsp_notify->fw_notify.data), data, dsize);
 
-	/*
-	 * If prd message size is <= OPAL_MSG_FIXED_PARAMS_SIZE then use
-	 * OPAL_MSG_PRD to pass data to kernel. So that it works fine on
-	 * older kernel (which does not support OPAL_MSG_PRD2).
-	 */
-	if (prd_msg_fsp_notify->hdr.size < OPAL_MSG_FIXED_PARAMS_SIZE)
-		msg_type = OPAL_MSG_PRD;
-
-	rc = _opal_queue_msg(msg_type, prd_msg_fsp_notify,
-			     prd_msg_consumed, size, prd_msg_fsp_notify);
+	rc = opal_queue_prd_msg(prd_msg_fsp_notify);
 	if (!rc)
 		prd_msg_inuse = true;
 
@@ -625,8 +620,7 @@ static int prd_msg_handle_firmware_req(struct opal_prd_msg *msg)
 	}
 
 	if (!rc) {
-		rc = _opal_queue_msg(OPAL_MSG_PRD, prd_msg, prd_msg_consumed,
-				     prd_msg->hdr.size, prd_msg);
+		rc = opal_queue_prd_msg(prd_msg);
 		if (rc)
 			prd_msg_inuse = false;
 	} else {
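
On the consumer side, the fix referenced in the commit message (commit
9cae036fa) has opal-prd size its receive buffer from the device tree
instead of assuming the fixed parameter size. A rough sketch of such a
lookup follows; the property path (/proc/device-tree/ibm,opal/opal-msg-size,
a big-endian u32) and the 32-byte fallback are assumptions for
illustration, not a copy of the daemon's code:

/*
 * Sketch: determine the OPAL message buffer size from the device tree,
 * as newer opal-prd does. The property path and the fallback value are
 * assumptions for illustration, not the daemon's actual code.
 */
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

#define PRD_MSG_SIZE_FALLBACK	0x20	/* matches the 32-byte cap above */

static size_t opal_msg_size_from_dt(void)
{
	const char *path = "/proc/device-tree/ibm,opal/opal-msg-size";
	uint32_t be_size;
	size_t size = PRD_MSG_SIZE_FALLBACK;
	FILE *f = fopen(path, "rb");

	if (!f)
		return size;	/* older firmware: property not present */

	/* Device tree cells are big-endian; convert for the host. */
	if (fread(&be_size, sizeof(be_size), 1, f) == 1)
		size = ntohl(be_size);

	fclose(f);
	return size;
}

int main(void)
{
	printf("PRD message buffer size: %zu bytes\n", opal_msg_size_from_dt());
	return 0;
}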