Message ID | 20190616171024.22799-13-hegdevasant@linux.vnet.ibm.com |
---|---|
State | Superseded |
Series | MPIPL support |
Context | Check | Description |
---|---|---|
snowpatch_ozlabs/apply_patch | success | Successfully applied on branch master (dbf27b6c4af84addb36bd3be34f96580aba9c873) |
snowpatch_ozlabs/snowpatch_job_snowpatch-skiboot | fail | Test snowpatch/job/snowpatch-skiboot on branch master |
snowpatch_ozlabs/snowpatch_job_snowpatch-skiboot-dco | success | Signed-off-by present |
Vasant Hegde's on June 17, 2019 3:10 am:
> This patch adds a new API to register for dump.
>
>   u64 opal_mpipl_update(u8 ops, u64 src, u64 dest, u64 size)
>
> ops :
>   OPAL_MPIPL_REGISTER_TAG
>     Kernel metadata pointer. The kernel sends this to OPAL during MPIPL
>     registration. Post MPIPL, the kernel requests this tag back via the
>     mpipl_query_tag API.
>     src  = kernel metadata address
>     dest = ignore
>     size = ignore
>
>   OPAL_MPIPL_ADD_RANGE
>     Add a new entry to the MPIPL table. The kernel sends src, dest and
>     size. During MPIPL, content from the source address is moved to the
>     destination address.
>     src  = source start address
>     dest = destination start address
>     size = size
>
>   OPAL_MPIPL_REMOVE_RANGE
>     Remove a kernel-requested entry from the MPIPL table.
>     src  = source start address
>     dest = destination start address
>     size = ignore
>
>   OPAL_MPIPL_REMOVE_ALL
>     Remove all kernel-passed entries from the MPIPL table.
>     src  = ignore
>     dest = ignore
>     size = ignore
>
>   OPAL_MPIPL_FREE_PRESERVED_MEMORY
>     Post MPIPL, the kernel indicates to OPAL that it has processed the
>     dump and OPAL can clear/release the metadata area.
>     src  = ignore
>     dest = ignore
>     size = ignore
>
> Return values:
>   OPAL_SUCCESS   : Operation success
>   OPAL_PARAMETER : Payload passed invalid data
>   OPAL_RESOURCE  : Ran out of MDST or MDDT table size
>   OPAL_HARDWARE  : MPIPL not supported

Thanks very much for working on this, I'm a lot happier with the API
now. It would be good if everybody interested could take a look at the
proposed DT and OPAL call APIs, even if you aren't able to review the
code and details carefully.

Do we undo a level of indirection and make each of these options its
own OPAL_ call? I think that would be nicer and you'd get proper
parameters, etc.

Thanks,
Nick

> Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
> ---
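[For reference, here is a minimal sketch of how a payload (the kernel)
might drive the proposed call. The opal_mpipl_update() wrapper
declaration, the OPAL_SUCCESS value and the mpipl_setup() helper are
illustrative assumptions, not part of this patch.]

    #include <stdint.h>

    /* Op codes, mirroring enum mpipl_ops from the patch */
    #define OPAL_MPIPL_REGISTER_TAG 0
    #define OPAL_MPIPL_ADD_RANGE    1
    #define OPAL_SUCCESS            0

    /* Hypothetical payload-side firmware call wrapper */
    extern int64_t opal_mpipl_update(uint8_t ops, uint64_t src,
                                     uint64_t dest, uint64_t size);

    /* Preserve [src, src + size) across MPIPL, then register metadata */
    int64_t mpipl_setup(uint64_t meta, uint64_t src, uint64_t dest,
                        uint64_t size)
    {
            int64_t rc;

            /* Ask OPAL to move the range to dest during MPIPL */
            rc = opal_mpipl_update(OPAL_MPIPL_ADD_RANGE, src, dest, size);
            if (rc != OPAL_SUCCESS)
                    return rc;

            /* Hand OPAL the metadata pointer to query back post-MPIPL */
            return opal_mpipl_update(OPAL_MPIPL_REGISTER_TAG, meta, 0, 0);
    }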
On 06/28/2019 07:09 AM, Nicholas Piggin wrote:
> Vasant Hegde's on June 17, 2019 3:10 am:
>> This patch adds a new API to register for dump.
>>   u64 opal_mpipl_update(u8 ops, u64 src, u64 dest, u64 size)
[... full API description snipped ...]
>
> Thanks very much for working on this, I'm a lot happier with the API
> now. It would be good if everybody interested could take a look at the
> proposed DT and OPAL call APIs, even if you aren't able to review the
> code and details carefully.

Thanks!

> Do we undo a level of indirection and make each of these options its
> own OPAL_ call? I think that would be nicer and you'd get proper
> parameters, etc.

I don't think we gain much from individual APIs compared to what we
have now.

-Vasant
Vasant Hegde's on June 28, 2019 8:13 pm:
> On 06/28/2019 07:09 AM, Nicholas Piggin wrote:
>> Vasant Hegde's on June 17, 2019 3:10 am:
>>> This patch adds a new API to register for dump.
>>>   u64 opal_mpipl_update(u8 ops, u64 src, u64 dest, u64 size)
[... full API description snipped ...]
>>
>> Do we undo a level of indirection and make each of these options its
>> own OPAL_ call? I think that would be nicer and you'd get proper
>> parameters, etc.
>
> I don't think we gain much from individual APIs compared to what we
> have now.

You get proper prototypes though, so the question is what do you gain
from having a multiplexer API?

Thanks,
Nick
On 06/28/2019 04:40 PM, Nicholas Piggin wrote:
> Vasant Hegde's on June 28, 2019 8:13 pm:
[... earlier discussion snipped ...]
>>> Do we undo a level of indirection and make each of these options its
>>> own OPAL_ call? I think that would be nicer and you'd get proper
>>> parameters, etc.
>>
>> I don't think we gain much from individual APIs compared to what we
>> have now.
>
> You get proper prototypes though, so the question is what do you gain
> from having a multiplexer API?

Well, I don't really gain anything with either approach. IMO a single
API is better as it's easier to manage one (and document just one
API :-))

-Vasant
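[To make the trade-off in this exchange concrete, the per-operation
alternative Nick suggests would look roughly like the hypothetical
prototypes below; each op gets a precise signature and the "ignore"
parameters of the multiplexer disappear. Names are illustrative only.]

    int64_t opal_mpipl_register_tag(uint64_t tag);
    int64_t opal_mpipl_add_range(uint64_t src, uint64_t dest, uint64_t size);
    int64_t opal_mpipl_remove_range(uint64_t src, uint64_t dest);
    int64_t opal_mpipl_remove_all(void);
    int64_t opal_mpipl_free_preserved_memory(void);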
diff --git a/core/opal-dump.c b/core/opal-dump.c
index ea9d00ead..bef54ebe1 100644
--- a/core/opal-dump.c
+++ b/core/opal-dump.c
@@ -96,6 +96,102 @@ static int opal_mpipl_add_entry(u8 region, u64 src, u64 dest, u64 size)
 	return OPAL_SUCCESS;
 }
 
+/* Remove entry from source (MDST) table */
+static int opal_mpipl_remove_entry_mdst(bool remove_all, u8 region, u64 src)
+{
+	bool found = false;
+	int i, j;
+	struct mdst_table *tmp_mdst;
+	struct mdst_table *mdst = (void *)(MDST_TABLE_BASE);
+
+	for (i = 0; i < ntuple_mdst->act_cnt;) {
+		if (mdst->data_region != region) {
+			mdst++;
+			i++;
+			continue;
+		}
+
+		if (remove_all != true && mdst->addr != (src | HRMOR_BIT)) {
+			mdst++;
+			i++;
+			continue;
+		}
+
+		tmp_mdst = mdst;
+		memset(tmp_mdst, 0, sizeof(struct mdst_table));
+
+		for (j = i; j < ntuple_mdst->act_cnt - 1; j++) {
+			memcpy((void *)tmp_mdst,
+			       (void *)(tmp_mdst + 1), sizeof(struct mdst_table));
+			tmp_mdst++;
+			memset(tmp_mdst, 0, sizeof(struct mdst_table));
+		}
+
+		ntuple_mdst->act_cnt--;
+
+		if (remove_all == false) {
+			found = true;
+			break;
+		}
+	} /* end - for loop */
+
+	if (remove_all == false && found == false) {
+		prlog(PR_DEBUG,
+		      "Source address [0x%llx] not found in MDST table\n", src);
+		return OPAL_PARAMETER;
+	}
+
+	return OPAL_SUCCESS;
+}
+
+/* Remove entry from destination (MDDT) table */
+static int opal_mpipl_remove_entry_mddt(bool remove_all, u8 region, u64 dest)
+{
+	bool found = false;
+	int i, j;
+	struct mddt_table *tmp_mddt;
+	struct mddt_table *mddt = (void *)(MDDT_TABLE_BASE);
+
+	for (i = 0; i < ntuple_mddt->act_cnt;) {
+		if (mddt->data_region != region) {
+			mddt++;
+			i++;
+			continue;
+		}
+
+		if (remove_all != true && mddt->addr != (dest | HRMOR_BIT)) {
+			mddt++;
+			i++;
+			continue;
+		}
+
+		tmp_mddt = mddt;
+		memset(tmp_mddt, 0, sizeof(struct mddt_table));
+
+		for (j = i; j < ntuple_mddt->act_cnt - 1; j++) {
+			memcpy((void *)tmp_mddt,
+			       (void *)(tmp_mddt + 1), sizeof(struct mddt_table));
+			tmp_mddt++;
+			memset(tmp_mddt, 0, sizeof(struct mddt_table));
+		}
+
+		ntuple_mddt->act_cnt--;
+
+		if (remove_all == false) {
+			found = true;
+			break;
+		}
+	} /* end - for loop */
+
+	if (remove_all == false && found == false) {
+		prlog(PR_DEBUG,
+		      "Dest address [0x%llx] not found in MDDT table\n", dest);
+		return OPAL_PARAMETER;
+	}
+
+	return OPAL_SUCCESS;
+}
+
 /* Register for OPAL dump.
  */
 static void opal_mpipl_register(void)
 {
@@ -120,6 +216,92 @@ static void opal_mpipl_register(void)
 		SKIBOOT_BASE, opal_dest, opal_size);
 }
 
+static int payload_mpipl_register(u64 src, u64 dest, u64 size)
+{
+	if (!opal_addr_valid((void *)src)) {
+		prlog(PR_DEBUG, "Invalid source address [0x%llx]\n", src);
+		return OPAL_PARAMETER;
+	}
+
+	if (!opal_addr_valid((void *)dest)) {
+		prlog(PR_DEBUG, "Invalid dest address [0x%llx]\n", dest);
+		return OPAL_PARAMETER;
+	}
+
+	if (size <= 0) {
+		prlog(PR_DEBUG, "Invalid size [0x%llx]\n", size);
+		return OPAL_PARAMETER;
+	}
+
+	return opal_mpipl_add_entry(DUMP_REGION_KERNEL, src, dest, size);
+}
+
+static int payload_mpipl_unregister(u64 src, u64 dest)
+{
+	int rc;
+
+	/* Remove src from MDST table */
+	rc = opal_mpipl_remove_entry_mdst(false, DUMP_REGION_KERNEL, src);
+	if (rc)
+		return rc;
+
+	/* Remove dest from MDDT table */
+	rc = opal_mpipl_remove_entry_mddt(false, DUMP_REGION_KERNEL, dest);
+
+	return rc;
+}
+
+static int payload_mpipl_unregister_all(void)
+{
+	opal_mpipl_remove_entry_mdst(true, DUMP_REGION_KERNEL, 0);
+	opal_mpipl_remove_entry_mddt(true, DUMP_REGION_KERNEL, 0);
+
+	return OPAL_SUCCESS;
+}
+
+static int64_t opal_mpipl_update(enum mpipl_ops ops,
+				 u64 src, u64 dest, u64 size)
+{
+	int rc;
+
+	if (ops > OPAL_MPIPL_FREE_PRESERVED_MEMORY) {
+		prlog(PR_DEBUG, "Unsupported operation : 0x%x\n", ops);
+		return OPAL_PARAMETER;
+	}
+
+	switch (ops) {
+	case OPAL_MPIPL_REGISTER_TAG:
+		mpipl_metadata->kernel_tag = src;
+		prlog(PR_NOTICE, "Payload sent metadata tag : 0x%llx\n", src);
+		rc = OPAL_SUCCESS;
+		break;
+	case OPAL_MPIPL_ADD_RANGE:
+		rc = payload_mpipl_register(src, dest, size);
+		if (!rc)
+			prlog(PR_NOTICE, "Payload registered for MPIPL\n");
+		break;
+	case OPAL_MPIPL_REMOVE_RANGE:
+		rc = payload_mpipl_unregister(src, dest);
+		if (!rc) {
+			prlog(PR_NOTICE, "Payload removed entry from MPIPL."
+			      "[src : 0x%llx, dest : 0x%llx]\n", src, dest);
+		}
+		break;
+	case OPAL_MPIPL_REMOVE_ALL:
+		rc = payload_mpipl_unregister_all();
+		if (!rc)
+			prlog(PR_NOTICE, "Payload unregistered for MPIPL\n");
+		break;
+	case OPAL_MPIPL_FREE_PRESERVED_MEMORY:
+		rc = OPAL_SUCCESS;
+		break;
+	default:
+		rc = OPAL_PARAMETER;
+		break;
+	}
+
+	return rc;
+}
+
 void opal_mpipl_init(void)
 {
 	void *mdst_base = (void *)MDST_TABLE_BASE;
@@ -153,4 +335,7 @@ void opal_mpipl_init(void)
 	ntuple_mddt->act_cnt = 0;
 
 	opal_mpipl_register();
+
+	/* OPAL API for MPIPL update */
+	opal_register(OPAL_MPIPL_UPDATE, opal_mpipl_update, 4);
 }
diff --git a/include/opal-api.h b/include/opal-api.h
index 0b0ae1969..bc6af2014 100644
--- a/include/opal-api.h
+++ b/include/opal-api.h
@@ -232,7 +232,8 @@
 #define OPAL_XIVE_GET_VP_STATE			170 /* Get NVT state */
 #define OPAL_NPU_MEM_ALLOC			171
 #define OPAL_NPU_MEM_RELEASE			172
-#define OPAL_LAST				172
+#define OPAL_MPIPL_UPDATE			173
+#define OPAL_LAST				173
 
 #define QUIESCE_HOLD			1 /* Spin all calls at entry */
 #define QUIESCE_REJECT			2 /* Fail all calls with OPAL_BUSY */
@@ -1215,6 +1216,15 @@ enum {
 	OPAL_PCI_P2P_TARGET	= 1,
 };
 
+/* MPIPL update operations */
+enum mpipl_ops {
+	OPAL_MPIPL_REGISTER_TAG		= 0,
+	OPAL_MPIPL_ADD_RANGE		= 1,
+	OPAL_MPIPL_REMOVE_RANGE		= 2,
+	OPAL_MPIPL_REMOVE_ALL		= 3,
+	OPAL_MPIPL_FREE_PRESERVED_MEMORY= 4,
+};
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* __OPAL_API_H */
diff --git a/include/opal-dump.h b/include/opal-dump.h
index 0838ee124..9f5f102be 100644
--- a/include/opal-dump.h
+++ b/include/opal-dump.h
@@ -34,6 +34,7 @@
 #define DUMP_REGION_CONSOLE	0x01
 #define DUMP_REGION_HBRT_LOG	0x02
 #define DUMP_REGION_OPAL_MEMORY	0x03
+#define DUMP_REGION_KERNEL	0x80
 
 /* Mainstore memory to be captured by FSP SYSDUMP */
 #define DUMP_TYPE_SYSDUMP	0xF5
This patch adds a new API to register for dump.

  u64 opal_mpipl_update(u8 ops, u64 src, u64 dest, u64 size)

ops :
  OPAL_MPIPL_REGISTER_TAG
    Kernel metadata pointer. The kernel sends this to OPAL during MPIPL
    registration. Post MPIPL, the kernel requests this tag back via the
    mpipl_query_tag API.
    src  = kernel metadata address
    dest = ignore
    size = ignore

  OPAL_MPIPL_ADD_RANGE
    Add a new entry to the MPIPL table. The kernel sends src, dest and
    size. During MPIPL, content from the source address is moved to the
    destination address.
    src  = source start address
    dest = destination start address
    size = size

  OPAL_MPIPL_REMOVE_RANGE
    Remove a kernel-requested entry from the MPIPL table.
    src  = source start address
    dest = destination start address
    size = ignore

  OPAL_MPIPL_REMOVE_ALL
    Remove all kernel-passed entries from the MPIPL table.
    src  = ignore
    dest = ignore
    size = ignore

  OPAL_MPIPL_FREE_PRESERVED_MEMORY
    Post MPIPL, the kernel indicates to OPAL that it has processed the
    dump and OPAL can clear/release the metadata area.
    src  = ignore
    dest = ignore
    size = ignore

Return values:
  OPAL_SUCCESS   : Operation success
  OPAL_PARAMETER : Payload passed invalid data
  OPAL_RESOURCE  : Ran out of MDST or MDDT table size
  OPAL_HARDWARE  : MPIPL not supported

Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
---
 core/opal-dump.c    | 185 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 include/opal-api.h  |  12 +++-
 include/opal-dump.h |   1 +
 3 files changed, 197 insertions(+), 1 deletion(-)
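[Post-MPIPL, the expected payload-side flow would look roughly like the
sketch below. opal_mpipl_query_tag() is a hypothetical wrapper for the
mpipl_query_tag API mentioned above, with an assumed signature; the
helper name is illustrative only.]

    #include <stdint.h>

    #define OPAL_MPIPL_FREE_PRESERVED_MEMORY 4

    /* Hypothetical payload-side wrappers (assumed signatures) */
    extern int64_t opal_mpipl_query_tag(uint8_t idx, uint64_t *tag);
    extern int64_t opal_mpipl_update(uint8_t ops, uint64_t src,
                                     uint64_t dest, uint64_t size);

    void mpipl_process_dump(void)
    {
            uint64_t tag;

            /* Retrieve the metadata tag registered before the crash */
            if (opal_mpipl_query_tag(0, &tag) != 0)
                    return;

            /* ... walk the metadata at `tag`, export the dump ... */

            /* Dump processed: let OPAL release the preserved metadata */
            opal_mpipl_update(OPAL_MPIPL_FREE_PRESERVED_MEMORY, 0, 0, 0);
    }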