From patchwork Tue Jun 19 15:20:54 2018
X-Patchwork-Submitter: "Liu, Jingqi"
X-Patchwork-Id: 931658
From: Liu Jingqi
To: imammedo@redhat.com, ehabkost@redhat.com, eblake@redhat.com,
    pbonzini@redhat.com, mst@redhat.com, marcel.apfelbaum@gmail.com,
    rth@twiddle.net, armbru@redhat.com
Cc: Liu Jingqi, qemu-devel@nongnu.org
Date: Tue, 19 Jun 2018 23:20:54 +0800
Message-Id: <1529421657-14969-4-git-send-email-jingqi.liu@intel.com>
In-Reply-To: <1529421657-14969-1-git-send-email-jingqi.liu@intel.com>
References: <1529421657-14969-1-git-send-email-jingqi.liu@intel.com>
Subject: [Qemu-devel] [PATCH V1 RESEND 3/6] hmat acpi: Build Memory Side Cache Information Structure(s) in ACPI HMAT

This structure describes the memory side cache information for memory
proximity domains, if a memory side cache is present and a physical
device (SMBIOS handle) forms the memory side cache. Software can use
this information to place data in memory effectively, maximizing the
performance of system memory that uses the memory side cache.
Signed-off-by: Liu Jingqi
---
 hw/acpi/hmat.c | 56 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 hw/acpi/hmat.h | 44 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 100 insertions(+)

diff --git a/hw/acpi/hmat.c b/hw/acpi/hmat.c
index 214f150..9d29ef7 100644
--- a/hw/acpi/hmat.c
+++ b/hw/acpi/hmat.c
@@ -35,6 +35,8 @@
 #include "hw/acpi/bios-linker-loader.h"
 
 struct numa_hmat_lb_info *hmat_lb_info[HMAT_LB_LEVELS][HMAT_LB_TYPES] = {0};
+struct numa_hmat_cache_info
+    *hmat_cache_info[MAX_NODES][MAX_HMAT_CACHE_LEVEL + 1] = {0};
 
 static uint32_t initiator_pxm[MAX_NODES], target_pxm[MAX_NODES];
 static uint32_t num_initiator, num_target;
@@ -210,6 +212,57 @@ static void hmat_build_lb(GArray *table_data)
     }
 }
 
+static void hmat_build_cache(GArray *table_data)
+{
+    AcpiHmatCacheInfo *hmat_cache;
+    struct numa_hmat_cache_info *numa_hmat_cache;
+    int i, level;
+
+    for (i = 0; i < nb_numa_nodes; i++) {
+        for (level = 0; level <= MAX_HMAT_CACHE_LEVEL; level++) {
+            numa_hmat_cache = hmat_cache_info[i][level];
+            if (numa_hmat_cache) {
+                uint64_t start = table_data->len;
+
+                hmat_cache = acpi_data_push(table_data, sizeof(*hmat_cache));
+                hmat_cache->length = cpu_to_le32(sizeof(*hmat_cache));
+                hmat_cache->type = cpu_to_le16(ACPI_HMAT_CACHE_INFO);
+                hmat_cache->mem_proximity =
+                    cpu_to_le32(numa_hmat_cache->mem_proximity);
+                hmat_cache->cache_size = cpu_to_le64(numa_hmat_cache->size);
+                hmat_cache->cache_attr = HMAT_CACHE_TOTAL_LEVEL(
+                                             numa_hmat_cache->total_levels);
+                hmat_cache->cache_attr |= HMAT_CACHE_CURRENT_LEVEL(
+                                              numa_hmat_cache->level);
+                hmat_cache->cache_attr |= HMAT_CACHE_ASSOC(
+                                              numa_hmat_cache->associativity);
+                hmat_cache->cache_attr |= HMAT_CACHE_WRITE_POLICY(
+                                              numa_hmat_cache->write_policy);
+                hmat_cache->cache_attr |= HMAT_CACHE_LINE_SIZE(
+                                              numa_hmat_cache->line_size);
+                hmat_cache->cache_attr = cpu_to_le32(hmat_cache->cache_attr);
+
+                if (numa_hmat_cache->num_smbios_handles != 0) {
+                    uint16_t *smbios_handles;
+                    int size;
+
+                    size = numa_hmat_cache->num_smbios_handles *
+                           sizeof(uint16_t);
+                    smbios_handles = acpi_data_push(table_data, size);
+
+                    /* acpi_data_push() may reallocate, re-fetch the pointer */
+                    hmat_cache = (AcpiHmatCacheInfo *)
+                                 (table_data->data + start);
+                    hmat_cache->length += size;
+
+                    /* TBD: set smbios handles */
+                    memset(smbios_handles, 0, size);
+                }
+                hmat_cache->num_smbios_handles =
+                    cpu_to_le16(numa_hmat_cache->num_smbios_handles);
+            }
+        }
+    }
+}
+
 static void hmat_build_hma(GArray *hma, PCMachineState *pcms)
 {
     /* Build HMAT Memory Subsystem Address Range. */
@@ -217,6 +270,9 @@ static void hmat_build_hma(GArray *hma, PCMachineState *pcms)
 
     /* Build HMAT System Locality Latency and Bandwidth Information. */
     hmat_build_lb(hma);
+
+    /* Build HMAT Memory Side Cache Information. */
+    hmat_build_cache(hma);
 }
 
 void hmat_build_acpi(GArray *table_data, BIOSLinker *linker,
diff --git a/hw/acpi/hmat.h b/hw/acpi/hmat.h
index fddd05e..f9fdcdc 100644
--- a/hw/acpi/hmat.h
+++ b/hw/acpi/hmat.h
@@ -33,6 +33,15 @@
 
 #define ACPI_HMAT_SPA      0
 #define ACPI_HMAT_LB_INFO  1
+#define ACPI_HMAT_CACHE_INFO  2
+
+#define MAX_HMAT_CACHE_LEVEL  3
+
+#define HMAT_CACHE_TOTAL_LEVEL(level)   (level & 0xF)
+#define HMAT_CACHE_CURRENT_LEVEL(level) ((level & 0xF) << 4)
+#define HMAT_CACHE_ASSOC(assoc)         ((assoc & 0xF) << 8)
+#define HMAT_CACHE_WRITE_POLICY(policy) ((policy & 0xF) << 12)
+#define HMAT_CACHE_LINE_SIZE(size)      ((size & 0xFFFF) << 16)
 
 /* ACPI HMAT sub-structure header */
 #define ACPI_HMAT_SUB_HEADER_DEF \
@@ -102,6 +111,17 @@ struct AcpiHmatLBInfo {
 } QEMU_PACKED;
 typedef struct AcpiHmatLBInfo AcpiHmatLBInfo;
 
+struct AcpiHmatCacheInfo {
+    ACPI_HMAT_SUB_HEADER_DEF
+    uint32_t mem_proximity;
+    uint32_t reserved;
+    uint64_t cache_size;
+    uint32_t cache_attr;
+    uint16_t reserved2;
+    uint16_t num_smbios_handles;
+} QEMU_PACKED;
+typedef struct AcpiHmatCacheInfo AcpiHmatCacheInfo;
+
 struct numa_hmat_lb_info {
     /*
      * Indicates total number of Proximity Domains
@@ -141,7 +161,31 @@ struct numa_hmat_lb_info {
     uint16_t bandwidth[MAX_NODES][MAX_NODES];
 };
 
+struct numa_hmat_cache_info {
+    /* The memory proximity domain to which the memory belongs. */
+    uint32_t mem_proximity;
+    /* Size of the memory side cache in bytes. */
+    uint64_t size;
+    /* Total cache levels for this memory proximity domain. */
+    uint8_t total_levels;
+    /* Cache level described in this structure. */
+    uint8_t level;
+    /* Cache Associativity: None/Direct Mapped/Complex Cache Indexing. */
+    uint8_t associativity;
+    /* Write Policy: None/Write Back (WB)/Write Through (WT). */
+    uint8_t write_policy;
+    /* Cache line size in bytes. */
+    uint16_t line_size;
+    /*
+     * Number of SMBIOS handles that contribute to
+     * the memory side cache physical devices.
+     */
+    uint16_t num_smbios_handles;
+};
+
 extern struct numa_hmat_lb_info *hmat_lb_info[HMAT_LB_LEVELS][HMAT_LB_TYPES];
+extern struct numa_hmat_cache_info
+           *hmat_cache_info[MAX_NODES][MAX_HMAT_CACHE_LEVEL + 1];
 
 void hmat_build_acpi(GArray *table_data, BIOSLinker *linker,
                     MachineState *machine);