From patchwork Wed Dec 6 14:17:02 2017
X-Patchwork-Submitter: Shilpasri G Bhat
X-Patchwork-Id: 845190
From: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
To: skiboot@lists.ozlabs.org
Date: Wed, 6 Dec 2017 19:47:02 +0530
Subject: [Skiboot] [PATCH V2 2/2] opal-prd: occ: Add support for runtime OCC load/start in ZZ
Message-Id: <1512569822-29803-3-git-send-email-shilpa.bhat@linux.vnet.ibm.com>
In-Reply-To: <1512569822-29803-1-git-send-email-shilpa.bhat@linux.vnet.ibm.com>
References: <1512569822-29803-1-git-send-email-shilpa.bhat@linux.vnet.ibm.com>
X-Mailer: git-send-email 1.8.3.1

This patch adds support for handling the OCC load/start event from FSP/PRD.
During IPL, on receiving the OCC load mbox message from FSP, we send a
success response directly to FSP without invoking any HBRT load routines.
At runtime we forward this event to the host opal-prd, which invokes the
OCC load/start HBRT routines load_pm_complex() and start_pm_complex().
Signed-off-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
---
 external/opal-prd/opal-prd.c | 96 +++++++++++++++++++++++++++++++++++++++++++-
 hw/occ.c                     | 62 +++++++++++++++++++++++-----
 hw/prd.c                     | 17 ++++++++
 include/hostservices.h       |  1 +
 include/opal-api.h           |  2 +
 include/skiboot.h            |  1 +
 6 files changed, 167 insertions(+), 12 deletions(-)

diff --git a/external/opal-prd/opal-prd.c b/external/opal-prd/opal-prd.c
index fb573e4..5a15f1d 100644
--- a/external/opal-prd/opal-prd.c
+++ b/external/opal-prd/opal-prd.c
@@ -306,6 +306,8 @@ extern int call_sbe_message_passing(uint32_t i_chipId);
 extern uint64_t call_get_ipoll_events(void);
 extern int call_firmware_notify(uint64_t len, void *data);
 extern int call_reset_pm_complex(uint64_t chip);
+extern int call_load_pm_complex(u64 chip, u64 homer, u64 occ_common, u32 mode);
+extern int call_start_pm_complex(u64 chip);
 
 void hservice_puts(const char *str)
 {
@@ -1421,6 +1423,61 @@ static int handle_msg_occ_error(struct opal_prd_ctx *ctx,
 	return 0;
 }
 
+static int pm_complex_load_start(void)
+{
+	struct prd_range *range;
+	u64 homer, occ_common;
+	int rc = -1, i;
+
+	if (!hservice_runtime->load_pm_complex) {
+		pr_log_nocall("load_pm_complex");
+		return rc;
+	}
+
+	if (!hservice_runtime->start_pm_complex) {
+		pr_log_nocall("start_pm_complex");
+		return rc;
+	}
+
+	range = find_range("ibm,occ-common-area", 0);
+	if (!range) {
+		pr_log(LOG_ERR, "PM: ibm,occ-common-area not found");
+		return rc;
+	}
+	occ_common = range->physaddr;
+
+	for (i = 0; i < nr_chips; i++) {
+		range = find_range("ibm,homer-image", chips[i]);
+		if (!range) {
+			pr_log(LOG_ERR, "PM: ibm,homer-image not found 0x%lx",
+			       chips[i]);
+			return -1;
+		}
+		homer = range->physaddr;
+
+		pr_debug("PM: calling load_pm_complex(0x%lx, 0x%lx, 0x%lx, LOAD)",
+			 chips[i], homer, occ_common);
+		rc = call_load_pm_complex(chips[i], homer, occ_common, 0);
+		if (rc) {
+			pr_log(LOG_ERR, "PM: Failed load_pm_complex(0x%lx) %m",
+			       chips[i]);
+			return rc;
+		}
+	}
+
+	for (i = 0; i < nr_chips; i++) {
+		pr_debug("PM: calling start_pm_complex(0x%lx)", chips[i]);
+		rc = call_start_pm_complex(chips[i]);
+		if (rc) {
+			pr_log(LOG_ERR, "PM: Failed start_pm_complex(0x%lx): %m",
+			       chips[i]);
+			return rc;
+		}
+	}
+
+	return rc;
+}
+
 static int pm_complex_reset(uint64_t chip)
 {
 	int rc;
@@ -1430,13 +1487,24 @@ static int pm_complex_reset(uint64_t chip)
 	 * BMC system -> process_occ_reset
 	 */
 	if (is_fsp_system()) {
+		int i;
+
 		if (!hservice_runtime->reset_pm_complex) {
 			pr_log_nocall("reset_pm_complex");
 			return -1;
 		}
 
-		pr_debug("PM: calling pm_complex_reset(%ld)", chip);
-		rc = call_reset_pm_complex(chip);
+		for (i = 0; i < nr_chips; i++) {
+			pr_debug("PM: calling pm_complex_reset(%ld)", chips[i]);
+			rc = call_reset_pm_complex(chip);
+			if (rc) {
+				pr_log(LOG_ERR, "PM: Failed pm_complex_reset(%ld): %m",
+				       chips[i]);
+				return rc;
+			}
+		}
+
+		rc = pm_complex_load_start();
 	} else {
 		if (!hservice_runtime->process_occ_reset) {
 			pr_log_nocall("process_occ_reset");
@@ -1542,6 +1610,27 @@ static int handle_msg_fsp_occ_reset(struct opal_prd_msg *msg)
 	return rc;
 }
 
+static int handle_msg_fsp_occ_load_start(struct opal_prd_msg *msg)
+{
+	struct opal_prd_msg omsg;
+	int rc;
+
+	pr_debug("FW: FSP requested OCC load/start");
+	rc = pm_complex_load_start();
+
+	omsg.hdr.type = OPAL_PRD_MSG_TYPE_FSP_OCC_LOAD_START_STATUS;
+	omsg.hdr.size = htobe16(sizeof(omsg));
+	omsg.fsp_occ_reset_status.chip = msg->occ_reset.chip;
+	omsg.fsp_occ_reset_status.status = htobe64(rc);
+
+	if (write(ctx->fd, &omsg, sizeof(omsg)) != sizeof(omsg)) {
+		pr_log(LOG_ERR, "FW: Failed to send FSP_OCC_LOAD_START_STATUS msg: %m");
+		return -1;
+	}
+
+	return rc;
+}
+
 static int handle_prd_msg(struct opal_prd_ctx *ctx, struct opal_prd_msg *msg)
 {
 	int rc = -1;
@@ -1565,6 +1654,9 @@ static int handle_prd_msg(struct opal_prd_ctx *ctx, struct opal_prd_msg *msg)
 	case OPAL_PRD_MSG_TYPE_FSP_OCC_RESET:
 		rc = handle_msg_fsp_occ_reset(msg);
 		break;
+	case OPAL_PRD_MSG_TYPE_FSP_OCC_LOAD_START:
+		rc = handle_msg_fsp_occ_load_start(msg);
+		break;
 	default:
 		pr_log(LOG_WARNING, "Invalid incoming message type 0x%x",
 				msg->hdr.type);
diff --git a/hw/occ.c b/hw/occ.c
index 3507d5f..f3f1231 100644
--- a/hw/occ.c
+++ b/hw/occ.c
@@ -1754,6 +1754,8 @@ void occ_poke_load_queue(void)
 	}
 }
 
+static u32 last_seq_id;
+static bool in_ipl = true;
 static void occ_do_load(u8 scope, u32 dbob_id __unused, u32 seq_id)
 {
 	struct fsp_msg *rsp;
@@ -1786,15 +1788,25 @@ static void occ_do_load(u8 scope, u32 dbob_id __unused, u32 seq_id)
 		return;
 
 	if (proc_gen == proc_gen_p9) {
-		rc = -ENOMEM;
-		/* OCC is pre-loaded in P9, so send SUCCESS to FSP */
-		rsp = fsp_mkmsg(FSP_CMD_LOAD_OCC_STAT, 2, 0, seq_id);
-		if (rsp)
+		if (in_ipl) {
+			/* OCC is pre-loaded in P9, so send SUCCESS to FSP */
+			rsp = fsp_mkmsg(FSP_CMD_LOAD_OCC_STAT, 2, 0, seq_id);
+			if (!rsp)
+				return;
+
 			rc = fsp_queue_msg(rsp, fsp_freemsg);
-		if (rc) {
-			log_simple_error(&e_info(OPAL_RC_OCC_LOAD),
-				"OCC: Error %d queueing FSP OCC LOAD STATUS msg", rc);
-			fsp_freemsg(rsp);
+			if (rc) {
+				log_simple_error(&e_info(OPAL_RC_OCC_LOAD),
+					"OCC: Error %d queueing OCC LOAD STATUS msg",
+					rc);
+				fsp_freemsg(rsp);
+			}
+			in_ipl = false;
+		} else {
+			struct proc_chip *chip = next_chip(NULL);
+
+			last_seq_id = seq_id;
+			prd_fsp_occ_load_start(chip->id);
 		}
 		return;
 	}
@@ -1843,8 +1855,6 @@ out:
 	return rc;
 }
 
-static u32 last_seq_id;
-
 int fsp_occ_reset_status(u64 chipid, s64 status)
 {
 	struct fsp_msg *stat;
@@ -1881,6 +1891,38 @@ int fsp_occ_reset_status(u64 chipid, s64 status)
 	return rc;
 }
 
+int fsp_occ_load_start_status(u64 chipid, s64 status)
+{
+	struct fsp_msg *stat;
+	int rc = OPAL_NO_MEM;
+	int status_word = 0;
+
+	if (status) {
+		struct proc_chip *chip = get_chip(chipid);
+
+		if (!chip)
+			return OPAL_PARAMETER;
+
+		status_word = 0xB500 | (chip->pcid & 0xff);
+		log_simple_error(&e_info(OPAL_RC_OCC_LOAD),
+				 "OCC: Error %d in load/start OCC %lld\n", rc,
+				 chipid);
+	}
+
+	stat = fsp_mkmsg(FSP_CMD_LOAD_OCC_STAT, 2, status_word, last_seq_id);
+	if (!stat)
+		return rc;
+
+	rc = fsp_queue_msg(stat, fsp_freemsg);
+	if (rc) {
+		fsp_freemsg(stat);
+		log_simple_error(&e_info(OPAL_RC_OCC_LOAD),
+			"OCC: Error %d queueing FSP OCC LOAD STATUS msg", rc);
+	}
+
+	return rc;
+}
+
 static void occ_do_reset(u8 scope, u32 dbob_id, u32 seq_id)
 {
 	struct fsp_msg *rsp, *stat;
diff --git a/hw/prd.c b/hw/prd.c
index 5f9758d..ad84dbd 100644
--- a/hw/prd.c
+++ b/hw/prd.c
@@ -31,6 +31,7 @@ enum events {
 	EVENT_OCC_RESET = 1 << 2,
 	EVENT_SBE_PASSTHROUGH = 1 << 3,
 	EVENT_FSP_OCC_RESET = 1 << 4,
+	EVENT_FSP_OCC_LOAD_START = 1 << 5,
 };
 
 static uint8_t events[MAX_CHIPS];
@@ -120,6 +121,10 @@ static void prd_msg_consumed(void *data)
 		proc = msg->occ_reset.chip;
 		event = EVENT_FSP_OCC_RESET;
 		break;
+	case OPAL_PRD_MSG_TYPE_FSP_OCC_LOAD_START:
+		proc = msg->occ_reset.chip;
+		event = EVENT_FSP_OCC_LOAD_START;
+		break;
 	default:
 		prlog(PR_ERR, "PRD: invalid msg consumed, type: 0x%x\n",
 				msg->hdr.type);
@@ -197,6 +202,9 @@ static void send_next_pending_event(void)
 	} else if (event & EVENT_FSP_OCC_RESET) {
 		prd_msg->hdr.type = OPAL_PRD_MSG_TYPE_FSP_OCC_RESET;
 		prd_msg->occ_reset.chip = proc;
+	} else if (event & EVENT_FSP_OCC_LOAD_START) {
+		prd_msg->hdr.type = OPAL_PRD_MSG_TYPE_FSP_OCC_LOAD_START;
+		prd_msg->occ_reset.chip = proc;
 	}
 
 	/*
@@ -293,6 +301,11 @@ void prd_sbe_passthrough(uint32_t proc)
 	prd_event(proc, EVENT_SBE_PASSTHROUGH);
 }
 
+void prd_fsp_occ_load_start(uint32_t proc)
+{
+	prd_event(proc, EVENT_FSP_OCC_LOAD_START);
+}
+
 /* incoming message handlers */
 static int prd_msg_handle_attn_ack(struct opal_prd_msg *msg)
 {
@@ -452,6 +465,10 @@ static int64_t opal_prd_msg(struct opal_prd_msg *msg)
 		rc = hservice_wakeup(msg->spl_wakeup.core,
 				     msg->spl_wakeup.mode);
 		break;
+	case OPAL_PRD_MSG_TYPE_FSP_OCC_LOAD_START_STATUS:
+		rc = fsp_occ_load_start_status(msg->fsp_occ_reset_status.chip,
+					       msg->fsp_occ_reset_status.status);
+		break;
 	default:
 		rc = OPAL_UNSUPPORTED;
 	}
diff --git a/include/hostservices.h b/include/hostservices.h
index cca3a3a..3130ed0 100644
--- a/include/hostservices.h
+++ b/include/hostservices.h
@@ -41,5 +41,6 @@ int find_master_and_slave_occ(uint64_t **master, uint64_t **slave,
 int hservice_send_error_log(uint32_t plid, uint32_t dsize, void *data);
 int hservice_wakeup(u32 core, u32 mode);
 int fsp_occ_reset_status(u64 chipid, s64 status);
+int fsp_occ_load_start_status(u64 chipid, s64 status);
 
 #endif /* __HOSTSERVICES_H */
diff --git a/include/opal-api.h b/include/opal-api.h
index ef32f65..f0ed5f6 100644
--- a/include/opal-api.h
+++ b/include/opal-api.h
@@ -1064,6 +1064,8 @@ enum opal_prd_msg_type {
 	OPAL_PRD_MSG_TYPE_FSP_OCC_RESET,		/* HBRT <-- OPAL */
 	OPAL_PRD_MSG_TYPE_FSP_OCC_RESET_STATUS,		/* HBRT --> OPAL */
 	OPAL_PRD_MSG_TYPE_CORE_SPECIAL_WAKEUP,		/* HBRT --> OPAL */
+	OPAL_PRD_MSG_TYPE_FSP_OCC_LOAD_START,		/* HBRT <-- OPAL */
+	OPAL_PRD_MSG_TYPE_FSP_OCC_LOAD_START_STATUS,	/* HBRT --> OPAL */
 };
 
 struct opal_prd_msg_header {
diff --git a/include/skiboot.h b/include/skiboot.h
index 795ee4f..03b82a8 100644
--- a/include/skiboot.h
+++ b/include/skiboot.h
@@ -294,6 +294,7 @@ extern void prd_sbe_passthrough(uint32_t proc);
 extern void prd_init(void);
 extern void prd_register_reserved_memory(void);
 extern void prd_fsp_occ_reset(uint32_t proc);
+extern void prd_fsp_occ_load_start(u32 proc);
 
 /* Flatten device-tree */
 extern void *create_dtb(const struct dt_node *root, bool exclusive);
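
For reviewers, below is a condensed, stand-alone sketch of the runtime flow the
patch implements. Every identifier in it (NR_CHIPS, occ_load_mbox_from_fsp(),
prd_load_and_start_pm_complex(), reply_load_occ_stat_to_fsp()) is an
illustrative stand-in, not a skiboot or opal-prd API; the real entry points are
occ_do_load(), prd_fsp_occ_load_start(), handle_msg_fsp_occ_load_start(),
pm_complex_load_start() and fsp_occ_load_start_status() as added above.

/*
 * Illustrative, self-contained model of the runtime OCC load/start flow.
 * All functions here are stubs written for this sketch only.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NR_CHIPS 2			/* assumed chip count for the sketch */

static bool in_ipl = true;		/* mirrors the new static in hw/occ.c */
static uint32_t last_seq_id;

/* Stand-in for opal-prd's pm_complex_load_start() */
static int prd_load_and_start_pm_complex(void)
{
	for (int i = 0; i < NR_CHIPS; i++)
		printf("opal-prd: load_pm_complex(chip %d)\n", i);
	for (int i = 0; i < NR_CHIPS; i++)
		printf("opal-prd: start_pm_complex(chip %d)\n", i);
	return 0;
}

/* Stand-in for skiboot's fsp_occ_load_start_status() reply to the FSP */
static void reply_load_occ_stat_to_fsp(int rc, uint32_t seq_id)
{
	printf("skiboot: LOAD_OCC_STAT status=%d seq=%u\n", rc, (unsigned)seq_id);
}

/* Stand-in for the new P9 branch of occ_do_load() */
static void occ_load_mbox_from_fsp(uint32_t seq_id)
{
	if (in_ipl) {
		/* IPL: OCC is pre-loaded, answer success straight away */
		reply_load_occ_stat_to_fsp(0, seq_id);
		in_ipl = false;
		return;
	}

	/* Runtime: remember the sequence id, hand the event to opal-prd
	 * (OPAL_PRD_MSG_TYPE_FSP_OCC_LOAD_START) and report the result
	 * back to the FSP once the ..._STATUS message arrives. */
	last_seq_id = seq_id;
	int rc = prd_load_and_start_pm_complex();
	reply_load_occ_stat_to_fsp(rc, last_seq_id);
}

int main(void)
{
	occ_load_mbox_from_fsp(1);	/* IPL-time load message */
	occ_load_mbox_from_fsp(2);	/* runtime load/start message */
	return 0;
}

The real code is asynchronous, of course: occ_do_load() only queues the PRD
event, and the LOAD_OCC_STAT reply goes out later from
fsp_occ_load_start_status() once opal-prd reports back.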