From patchwork Fri Oct 13 11:21:18 2017
From: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
To: skiboot@lists.ozlabs.org
Date: Fri, 13 Oct 2017 16:51:18 +0530
Message-Id: <20171013112119.30164-1-hegdevasant@linux.vnet.ibm.com>
Subject: [Skiboot] [PATCH v2 1/2] hdata: Add memory hierarchy under xscom node
List-Id: Mailing list for skiboot development

We have a memory-to-chip mapping, but do not
have the complete memory hierarchy. This patch adds the memory hierarchy
under the xscom node. It is specific to P9 systems, as the hierarchy may
change between processor generations. It uses the memory controller ID
details to populate nodes of the form:

  xscom@/mcbist@/mcs@/mca@/dimm@

It also adds a few properties under the dimm node. Finally, it makes sure
the xscom nodes are created before memory_parse() is called.

Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
---
 hdata/memory.c               | 130 ++++++++++++++++++++++++++++++++++++++++++-
 hdata/spira.c                |   6 +-
 hdata/test/p8-840-spira.dts  |  10 ++--
 hdata/test/p81-811.spira.dts |  20 +++----
 4 files changed, 146 insertions(+), 20 deletions(-)

diff --git a/hdata/memory.c b/hdata/memory.c
index dbb0ac4..74eedff 100644
--- a/hdata/memory.c
+++ b/hdata/memory.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include
 #include "spira.h"
 #include "hdata.h"
@@ -44,8 +45,13 @@ struct HDIF_ms_area_address_range {
 	__be32 chip;
 	__be32 mirror_attr;
 	__be64 mirror_start;
+	__be32 controller_id;
 } __packed;
 
+#define MS_CONTROLLER_MCBIST_ID(id)	GETFIELD(PPC_BITMASK32(0, 1), id)
+#define MS_CONTROLLER_MCS_ID(id)	GETFIELD(PPC_BITMASK32(4, 7), id)
+#define MS_CONTROLLER_MCA_ID(id)	GETFIELD(PPC_BITMASK32(8, 15), id)
+
 struct HDIF_ms_area_id {
 	__be16 id;
 #define MS_PTYPE_RISER_CARD	0x8000
@@ -313,6 +319,121 @@ static void vpd_add_ram_area(const struct HDIF_common_hdr *msarea)
 	}
 }
 
+static void add_mca_dimm_info(struct dt_node *mca,
+			      const struct HDIF_common_hdr *msarea)
+{
+	unsigned int i;
+	const struct HDIF_child_ptr *ramptr;
+	const struct HDIF_common_hdr *ramarea;
+	const struct spira_fru_id *fru_id;
+	const struct HDIF_ram_area_id *ram_id;
+	const struct HDIF_ram_area_size *ram_area_sz;
+	struct dt_node *dimm;
+
+	ramptr = HDIF_child_arr(msarea, 0);
+	if (!CHECK_SPPTR(ramptr)) {
+		prerror("MS AREA: No RAM area at %p\n", msarea);
+		return;
+	}
+
+	for (i = 0; i < be32_to_cpu(ramptr->count); i++) {
+		ramarea = HDIF_child(msarea, ramptr, i, "RAM ");
+		if (!CHECK_SPPTR(ramarea))
+			continue;
+
+		fru_id = HDIF_get_idata(ramarea, 0, NULL);
+		if (!fru_id)
+			continue;
+
+		/* Use Resource ID to add dimm node */
+		dimm = dt_find_by_name_addr(mca, "dimm",
+					    be16_to_cpu(fru_id->rsrc_id));
+		if (dimm)
+			continue;
+		dimm = dt_new_addr(mca, "dimm", be16_to_cpu(fru_id->rsrc_id));
+		assert(dimm);
+		dt_add_property_cells(dimm, "reg", be16_to_cpu(fru_id->rsrc_id));
+
+		/* Add location code */
+		slca_vpd_add_loc_code(dimm, be16_to_cpu(fru_id->slca_index));
+
+		/* DIMM size */
+		ram_area_sz = HDIF_get_idata(ramarea, 3, NULL);
+		if (!CHECK_SPPTR(ram_area_sz))
+			continue;
+		dt_add_property_cells(dimm, "size", be32_to_cpu(ram_area_sz->mb));
+
+		/* DIMM state */
+		ram_id = HDIF_get_idata(ramarea, 2, NULL);
+		if (!CHECK_SPPTR(ram_id))
+			continue;
+
+		if ((be16_to_cpu(ram_id->flags) & RAM_AREA_INSTALLED) &&
+		    (be16_to_cpu(ram_id->flags) & RAM_AREA_FUNCTIONAL))
+			dt_add_property_string(dimm, "status", "okay");
+		else
+			dt_add_property_string(dimm, "status", "disabled");
+	}
+}
+
+static inline void dt_add_mem_reg_property(struct dt_node *node, u64 addr)
+{
+	dt_add_property_cells(node, "#address-cells", 1);
+	dt_add_property_cells(node, "#size-cells", 0);
+	dt_add_property_cells(node, "reg", addr);
+}
+
+static void add_memory_controller(const struct HDIF_common_hdr *msarea,
+				  const struct HDIF_ms_area_address_range *arange)
+{
+	uint32_t chip_id, version;
+	uint32_t controller_id, mcbist_id, mcs_id, mca_id;
+	struct dt_node *xscom, *mcbist, *mcs, *mca;
+
+	/*
+	 * The memory hierarchy may change between processor versions.
+	 * Presently this creates the memory hierarchy for P9 (Nimbus) only.
+	 */
+	version = PVR_TYPE(mfspr(SPR_PVR));
+	if (version != PVR_TYPE_P9)
+		return;
+
+	chip_id = pcid_to_chip_id(be32_to_cpu(arange->chip));
+	controller_id = be32_to_cpu(arange->controller_id);
+	xscom = find_xscom_for_chip(chip_id);
+	if (!xscom) {
+		prlog(PR_WARNING,
+		      "MS AREA: Can't find XSCOM for chip %d\n", chip_id);
+		return;
+	}
+
+	mcbist_id = MS_CONTROLLER_MCBIST_ID(controller_id);
+	mcbist = dt_find_by_name_addr(xscom, "mcbist", mcbist_id);
+	if (!mcbist) {
+		mcbist = dt_new_addr(xscom, "mcbist", mcbist_id);
+		assert(mcbist);
+		dt_add_mem_reg_property(mcbist, mcbist_id);
+	}
+
+	mcs_id = MS_CONTROLLER_MCS_ID(controller_id);
+	mcs = dt_find_by_name_addr(mcbist, "mcs", mcs_id);
+	if (!mcs) {
+		mcs = dt_new_addr(mcbist, "mcs", mcs_id);
+		assert(mcs);
+		dt_add_mem_reg_property(mcs, mcs_id);
+	}
+
+	mca_id = MS_CONTROLLER_MCA_ID(controller_id);
+	mca = dt_find_by_name_addr(mcs, "mca", mca_id);
+	if (!mca) {
+		mca = dt_new_addr(mcs, "mca", mca_id);
+		assert(mca);
+		dt_add_mem_reg_property(mca, mca_id);
+	}
+
+	add_mca_dimm_info(mca, msarea);
+}
+
 static void get_msareas(struct dt_node *root,
 			const struct HDIF_common_hdr *ms_vpd)
 {
@@ -332,7 +453,7 @@ static void get_msareas(struct dt_node *root,
 	const struct HDIF_ms_area_address_range *arange;
 	const struct HDIF_ms_area_id *id;
 	const void *fruid;
-	unsigned int size, j;
+	unsigned int size, j, offset;
 	u16 flags;
 
 	msarea = HDIF_child(ms_vpd, msptr, i, "MSAREA");
@@ -372,7 +493,8 @@ static void get_msareas(struct dt_node *root,
 		return;
 	}
 
-	if (be32_to_cpu(arr->eactsz) < sizeof(*arange)) {
+	offset = offsetof(struct HDIF_ms_area_address_range, mirror_start);
+	if (be32_to_cpu(arr->eactsz) < offset) {
 		prerror("MS VPD: %p msarea #%i arange size too small!\n",
 			ms_vpd, i);
 		return;
@@ -392,6 +514,10 @@ static void get_msareas(struct dt_node *root,
 	/* This offset is from the arr, not the header! */
 	arange = (void *)arr + be32_to_cpu(arr->offset);
 	for (j = 0; j < be32_to_cpu(arr->ecnt); j++) {
+		offset = offsetof(struct HDIF_ms_area_address_range, controller_id);
+		if (be32_to_cpu(arr->eactsz) >= offset)
+			add_memory_controller(msarea, arange);
+
 		if (!add_address_range(root, id, arange))
 			return;
 		arange = (void *)arange + be32_to_cpu(arr->esize);
diff --git a/hdata/spira.c b/hdata/spira.c
index adaa604..a13f38e 100644
--- a/hdata/spira.c
+++ b/hdata/spira.c
@@ -1581,12 +1581,12 @@ int parse_hdat(bool is_opal)
 	/* IPL params */
 	add_iplparams();
 
-	/* Parse MS VPD */
-	memory_parse();
-
 	/* Add XSCOM node (must be before chiptod, IO and FSP) */
 	add_xscom();
 
+	/* Parse MS VPD */
+	memory_parse();
+
 	/* Add any FSPs */
 	fsp_parse();
diff --git a/hdata/test/p8-840-spira.dts b/hdata/test/p8-840-spira.dts
index a384434..5ba5149 100644
--- a/hdata/test/p8-840-spira.dts
+++ b/hdata/test/p8-840-spira.dts
@@ -568,7 +568,7 @@
 	};
 
 	memory@0 {
-		phandle = <0x41>;
+		phandle = <0x45>;
 		device_type = "memory";
 		ibm,chip-id = <0x0>;
 		reg = <0x0 0x0 0x8 0x0>;
@@ -864,7 +864,7 @@
 	};
 
 	xscom@3fc0000000000 {
-		phandle = <0x42>;
+		phandle = <0x41>;
 		ibm,chip-id = <0x0>;
 		ibm,proc-chip-id = <0x0>;
 		#address-cells = <0x1>;
@@ -917,7 +917,7 @@
 		};
 
 		psihb@2010900 {
-			phandle = <0x43>;
+			phandle = <0x42>;
 			reg = <0x2010900 0x20>;
 			compatible = "ibm,power8-psihb-x", "ibm,psihb-x";
 			boot-link;
@@ -926,7 +926,7 @@
 	};
 
 	xscom@3fc0800000000 {
-		phandle = <0x44>;
+		phandle = <0x43>;
 		ibm,chip-id = <0x1>;
 		ibm,proc-chip-id = <0x1>;
 		#address-cells = <0x1>;
@@ -979,7 +979,7 @@
 		};
 
 		psihb@2010900 {
-			phandle = <0x45>;
+			phandle = <0x44>;
 			reg = <0x2010900 0x20>;
 			compatible = "ibm,power8-psihb-x", "ibm,psihb-x";
 		};
diff --git a/hdata/test/p81-811.spira.dts b/hdata/test/p81-811.spira.dts
index c0976e6..43891bb 100644
--- a/hdata/test/p81-811.spira.dts
+++ b/hdata/test/p81-811.spira.dts
@@ -1660,14 +1660,14 @@
 	};
 
 	memory@0 {
-		phandle = <0x81>;
+		phandle = <0x89>;
 		device_type = "memory";
 		ibm,chip-id = <0x0>;
 		reg = <0x0 0x0 0x10 0x0>;
 	};
 
 	memory@1000000000 {
-		phandle = <0x82>;
+		phandle = <0x8a>;
 		device_type = "memory";
 		ibm,chip-id = <0x10>;
 		reg = <0x10 0x0 0x10 0x0>;
@@ -2059,7 +2059,7 @@
 	};
 
 	xscom@3fc0000000000 {
-		phandle = <0x83>;
+		phandle = <0x81>;
 		ibm,chip-id = <0x0>;
 		ibm,proc-chip-id = <0x0>;
 		#address-cells = <0x1>;
@@ -2112,7 +2112,7 @@
 		};
 
 		psihb@2010900 {
-			phandle = <0x84>;
+			phandle = <0x82>;
 			reg = <0x2010900 0x20>;
 			compatible = "ibm,power8-psihb-x", "ibm,psihb-x";
 			boot-link;
@@ -2121,7 +2121,7 @@
 	};
 
 	xscom@3fc0800000000 {
-		phandle = <0x85>;
+		phandle = <0x83>;
 		ibm,chip-id = <0x1>;
 		ibm,proc-chip-id = <0x1>;
 		#address-cells = <0x1>;
@@ -2173,14 +2173,14 @@
 		};
 
 		psihb@2010900 {
-			phandle = <0x86>;
+			phandle = <0x84>;
 			reg = <0x2010900 0x20>;
 			compatible = "ibm,power8-psihb-x", "ibm,psihb-x";
 		};
 	};
 
 	xscom@3fc8000000000 {
-		phandle = <0x87>;
+		phandle = <0x85>;
 		ibm,chip-id = <0x10>;
 		ibm,proc-chip-id = <0x2>;
 		#address-cells = <0x1>;
@@ -2222,7 +2222,7 @@
 		};
 
 		psihb@2010900 {
-			phandle = <0x88>;
+			phandle = <0x86>;
 			reg = <0x2010900 0x20>;
 			compatible = "ibm,power8-psihb-x", "ibm,psihb-x";
 			status = "ok";
@@ -2230,7 +2230,7 @@
 	};
 
 	xscom@3fc8800000000 {
-		phandle = <0x89>;
+		phandle = <0x87>;
 		ibm,chip-id = <0x11>;
 		ibm,proc-chip-id = <0x3>;
 		#address-cells = <0x1>;
@@ -2282,7 +2282,7 @@
 		};
 
 		psihb@2010900 {
-			phandle = <0x8a>;
+			phandle = <0x88>;
 			reg = <0x2010900 0x20>;
 			compatible = "ibm,power8-psihb-x", "ibm,psihb-x";
 		};

From patchwork Fri Oct 13 11:21:19 2017
From: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
To: skiboot@lists.ozlabs.org
Date: Fri, 13 Oct 2017 16:51:19 +0530
Message-Id: <20171013112119.30164-2-hegdevasant@linux.vnet.ibm.com>
In-Reply-To: <20171013112119.30164-1-hegdevasant@linux.vnet.ibm.com>
References: <20171013112119.30164-1-hegdevasant@linux.vnet.ibm.com>
Subject: [Skiboot] [PATCH v2 2/2] hdata: Parse SPD data

Parse SPD data and populate the device tree. List of properties parsed
from SPD:

  [root@ltc-wspoon dimm@d00f]# lsprop .
  memory-id	 0000000c (12)		<-- DIMM type
  product-version	 00000032 (50)	<-- Module Revision Code
  device_type	 "memory-dimm-ddr4"
  serial-number	 15d9acb6 (366587062)
  status		 "okay"
  size		 00004000 (16384)
  phandle		 000000bd (189)
  ibm,loc-code	 "UOPWR.0000000-Node0-DIMM7"
  part-number	 "36ASF2G72PZ-2G6B2         "
  reg		 0000d007 (53255)
  name		 "dimm"
  manufacturer-id	 0000802c (32812)	<-- Vendor ID; vendor name can be derived from this ID

Also update documentation.

Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
---
Changes in v2:
  - Added a few more properties
  - Updated documentation

-Vasant

 doc/device-tree/memory-hierarchy.rst | 24 +++++++++++++++++
 hdata/memory.c                       | 50 +++++++++++++++++++++++++++++++++++-
 2 files changed, 73 insertions(+), 1 deletion(-)
 create mode 100644 doc/device-tree/memory-hierarchy.rst

diff --git a/doc/device-tree/memory-hierarchy.rst b/doc/device-tree/memory-hierarchy.rst
new file mode 100644
index 0000000..1da0c54
--- /dev/null
+++ b/doc/device-tree/memory-hierarchy.rst
@@ -0,0 +1,24 @@
+P9 memory hierarchy
+-------------------
+P9 Nimbus supports direct-attached DDR memory through 4 DDR ports per side
+of the processor. The device tree contains the memory hierarchy so that one
+can traverse from chip to DIMM like below:
+
+  xscom@/mcbist@/mcs@/mca@/dimm@
+
+Example of a dimm node:
+
+.. code-block:: dts
+
+   dimm@d00e {
+	memory-id = <0xc>;		/* DRAM Device Type. 0xc = DDR4 */
+	product-version = <0x32>;	/* Module Revision Code */
+	device_type = "memory-dimm-ddr4";
+	serial-number = <0x15d9ad1c>;
+	status = "okay";
+	size = <0x4000>;
+	phandle = <0xd2>;
+	ibm,loc-code = "UOPWR.0000000-Node0-DIMM14";
+	part-number = "36ASF2G72PZ-2G6B2         ";
+	reg = <0xd00e>;
+	manufacturer-id = <0x802c>;	/* Vendor ID; vendor name can be derived from it */
+   };
diff --git a/hdata/memory.c b/hdata/memory.c
index 74eedff..27dc559 100644
--- a/hdata/memory.c
+++ b/hdata/memory.c
@@ -319,16 +319,56 @@ static void vpd_add_ram_area(const struct HDIF_common_hdr *msarea)
 	}
 }
 
+static void vpd_parse_spd(struct dt_node *dimm, const char *spd, u32 size)
+{
+	u16 *vendor;
+	u32 *sn;
+
+	/* SPD is too small */
+	if (size < 512) {
+		prlog(PR_WARNING, "MSVPD: Invalid SPD size. "
+		      "Expected 512 bytes, got %d\n", size);
+		return;
+	}
+
+	/* Supports DDR4 format parsing only */
+	if (spd[0x2] < 0xc) {
+		prlog(PR_WARNING,
+		      "MSVPD: SPD format (%x) not supported\n", spd[0x2]);
+		return;
+	}
+
+	dt_add_property_string(dimm, "device_type", "memory-dimm-ddr4");
+
+	/* DRAM device type */
+	dt_add_property_cells(dimm, "memory-id", spd[0x2]);
+
+	/* Module revision code */
+	dt_add_property_cells(dimm, "product-version", spd[0x15d]);
+
+	/* Serial number */
+	sn = (u32 *)&spd[0x145];
+	dt_add_property_cells(dimm, "serial-number", be32_to_cpu(*sn));
+
+	/* Part number */
+	dt_add_property_nstr(dimm, "part-number", &spd[0x149], 20);
+
+	/* Module manufacturer ID */
+	vendor = (u16 *)&spd[0x140];
+	dt_add_property_cells(dimm, "manufacturer-id", be16_to_cpu(*vendor));
+}
+
 static void add_mca_dimm_info(struct dt_node *mca,
 			      const struct HDIF_common_hdr *msarea)
 {
-	unsigned int i;
+	unsigned int i, size;
 	const struct HDIF_child_ptr *ramptr;
 	const struct HDIF_common_hdr *ramarea;
 	const struct spira_fru_id *fru_id;
 	const struct HDIF_ram_area_id *ram_id;
 	const struct HDIF_ram_area_size *ram_area_sz;
 	struct dt_node *dimm;
+	const void *vpd_blob;
 
 	ramptr = HDIF_child_arr(msarea, 0);
 	if (!CHECK_SPPTR(ramptr)) {
@@ -373,6 +413,14 @@ static void add_mca_dimm_info(struct dt_node *mca,
 			dt_add_property_string(dimm, "status", "okay");
 		else
 			dt_add_property_string(dimm, "status", "disabled");
+
+		vpd_blob = HDIF_get_idata(ramarea, 1, &size);
+		if (!CHECK_SPPTR(vpd_blob))
+			continue;
+		if (vpd_valid(vpd_blob, size))
+			vpd_data_parse(dimm, vpd_blob, size);
+		else
+			vpd_parse_spd(dimm, vpd_blob, size);
 	}
 }
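
The MS_CONTROLLER_*_ID macros in patch 1 carve the MCBIST/MCS/MCA IDs out of
the HDAT controller-ID word using IBM (MSB-0) bit numbering, where bit 0 is
the most significant bit. A minimal standalone sketch of that decode, with a
plain-C stand-in for skiboot's GETFIELD/PPC_BITMASK32 helpers (the field
positions — bits 0-1, 4-7 and 8-15 — are taken from the patch; the helper
names here are illustrative, not skiboot API):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Stand-in for skiboot's GETFIELD(PPC_BITMASK32(first, last), val).
 * IBM bit numbering: bit 0 is the MOST significant bit of the 32-bit word,
 * so a field ending at bit `last` sits (31 - last) bits above the LSB.
 */
static uint32_t bits32(uint32_t val, unsigned int first, unsigned int last)
{
	unsigned int width = last - first + 1;

	return (val >> (31 - last)) & ((1u << width) - 1);
}

/* Field positions as defined by MS_CONTROLLER_*_ID in the patch. */
static uint32_t mcbist_id(uint32_t id) { return bits32(id, 0, 1);  }
static uint32_t mcs_id(uint32_t id)    { return bits32(id, 4, 7);  }
static uint32_t mca_id(uint32_t id)    { return bits32(id, 8, 15); }
```

For example, a (hypothetical) controller ID of 0x4A330000 decodes to
mcbist 1, mcs 0xA, mca 0x33 — the three address components used to build the
xscom@/mcbist@/mcs@/mca@ node path.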