
[SRU,Bionic,1/1] UBUNTU: SAUCE: powerpc/perf: Fix memory allocation for core-imc based on num_possible_cpus()

Message ID 9deeb777fa9cf5e362bcc927134fcae14ed7be5f.1526654420.git.joseph.salisbury@canonical.com
State New
Series UBUNTU: SAUCE: powerpc/perf: Fix memory allocation for core-imc based on num_possible_cpus()

Commit Message

Joseph Salisbury May 18, 2018, 2:48 p.m. UTC
From: Anju T Sudhakar <anju@linux.vnet.ibm.com>

BugLink: http://bugs.launchpad.net/bugs/1770849

Currently, memory for core-imc is allocated based on cpu_present_mask, which
has bit 'cpu' set iff cpu is populated. We use (cpu number / threads per core)
as an array index to access that memory. In a system with guarded cores, the
present mask has holes, so an allocation sized from cpu_present_mask can be
smaller than the largest index, leading to a memory overflow.

The issue is exposed by a guard test. The guard test makes some CPUs
unavailable to the system at boot time as well as at runtime. When a CPU is
unavailable at boot, the memory allocation is sized by the number of
available CPUs, yet the memory is still accessed using
(cpu number / threads per core) as the index, so the system crashes due to a
memory overflow.

Allocating memory for core-imc based on cpu_possible_mask, which has bit
'cpu' set iff cpu is populatable, fixes this issue.

Reported-by: Pridhiviraj Paidipeddi <ppaidipe@linux.vnet.ibm.com>
Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Signed-off-by: Joseph Salisbury <joseph.salisbury@canonical.com>
---
 arch/powerpc/perf/imc-pmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Colin Ian King May 18, 2018, 3:14 p.m. UTC | #1
On 18/05/18 15:48, Joseph Salisbury wrote:
> From: Anju T Sudhakar <anju at linux.vnet.ibm.com>
> [...]

Fixes the issue, has positive test results, and is limited to a specific
arch, so regression potential is also limited. Seems like a sensible fix to me.

Acked-by: Colin Ian King <colin.king@canonical.com>
Seth Forshee May 22, 2018, 2:18 p.m. UTC | #2
On Fri, May 18, 2018 at 10:48:03AM -0400, Joseph Salisbury wrote:
> From: Anju T Sudhakar <anju at linux.vnet.ibm.com>
> [...]

Acked-by: Seth Forshee <seth.forshee@canonical.com>

Applied to unstable/master, thanks!
Stefan Bader May 23, 2018, 8:42 a.m. UTC | #3
Applied to bionic master-next.

-Stefan

Patch

diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
index be4e7f8..dfb5864 100644
--- a/arch/powerpc/perf/imc-pmu.c
+++ b/arch/powerpc/perf/imc-pmu.c
@@ -1132,7 +1132,7 @@ static int init_nest_pmu_ref(void)
 
 static void cleanup_all_core_imc_memory(void)
 {
-	int i, nr_cores = DIV_ROUND_UP(num_present_cpus(), threads_per_core);
+	int i, nr_cores = DIV_ROUND_UP(num_possible_cpus(), threads_per_core);
 	struct imc_mem_info *ptr = core_imc_pmu->mem_info;
 	int size = core_imc_pmu->counter_mem_size;
 
@@ -1248,7 +1248,7 @@ static int imc_mem_init(struct imc_pmu *pmu_ptr, struct device_node *parent,
 		if (!pmu_ptr->pmu.name)
 			return -ENOMEM;
 
-		nr_cores = DIV_ROUND_UP(num_present_cpus(), threads_per_core);
+		nr_cores = DIV_ROUND_UP(num_possible_cpus(), threads_per_core);
 		pmu_ptr->mem_info = kcalloc(nr_cores, sizeof(struct imc_mem_info),
 								GFP_KERNEL);