[1/2] npu2: Rework phb-index assignments for virtual PHBs

Message ID 20190801085500.12505-1-fbarrat@linux.ibm.com
State New
Series
  • [1/2] npu2: Rework phb-index assignments for virtual PHBs

Checks

Context Check Description
snowpatch_ozlabs/snowpatch_job_snowpatch-skiboot-dco success Signed-off-by present
snowpatch_ozlabs/snowpatch_job_snowpatch-skiboot success Test snowpatch/job/snowpatch-skiboot on branch master
snowpatch_ozlabs/apply_patch success Successfully applied on branch master (2554cac82da530acfcb1a575c571e760de92dde4)

Commit Message

Frederic Barrat Aug. 1, 2019, 8:54 a.m. UTC
Until now, opencapi PHBs were not using the 'ibm,phb-index' property,
as it was thought unnecessary. For nvlink, a phb-index was associated
to the npu when parsing hdat data, and the nvlink PHB was reusing the
same value.

It turns out it helps to have the 'ibm,phb-index' property for
opencapi PHBs after all. Otherwise it can lead to wrong results on
platforms like mihawk when trying to match entries in the slot
table. We end up with an opencapi device inheriting wrong properties
in the device tree, because match_slot_phb_entry() defaults to
phb-index 0 if it cannot find the property. Though it doesn't seem to
cause any harm, it's wrong and a future patch is expected to start
using the slot table for opencapi, so it needs fixing.

The twist is that with opencapi, we can have multiple virtual PHBs for
a single NPU, at least on P9. Therefore there's no 1-to-1 mapping
between the NPU and the PHB index. It doesn't really make sense to
associate a phb-index with an npu.

With this patch, opencapi PHBs created under a NPU use a fixed mapping
for their phb-index, based on the brick index. The range of possible
values is 7 to 12. Because there can be only one nvlink PHB per NPU,
it always uses phb-index 7.

A side effect is that 2 virtual PHBs on 2 different chips can have the
same phb-index, which is similar to what happens for 'real' PCI PHBs,
but is different from what was happening on a nvlink-only witherspoon
so far.

This doesn't really affect Axone, where we have a 1-to-1 mapping
between NPU and PHB for nvlink (and it will remain true with
opencapi), but for consistency, we define the phb-index when creating
the PHB, as on P9.

Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
---

Reza, Alistair: see the blurb above about the phb-index of the nvlink
PHBs changing from 7 and 8 on witherspoon, to 7 and 7. My
understanding is that 1) it's not really used on witherspoon and 2)
when it is used, the phb-index is associated with the chip ID, as it's
needed for real PHBs anyway, but I thought I should highlight it.


 hw/npu2-common.c   | 1 -
 hw/npu2-opencapi.c | 2 ++
 hw/npu2.c          | 2 +-
 hw/npu3-nvlink.c   | 2 +-
 include/npu2.h     | 2 +-
 include/npu3.h     | 2 ++
 6 files changed, 7 insertions(+), 4 deletions(-)

Comments

Andrew Donnellan Aug. 2, 2019, 5:27 a.m. UTC | #1
On 1/8/19 6:54 pm, Frederic Barrat wrote:
> Until now, opencapi PHBs were not using the 'ibm,phb-index' property,
> as it was thought unnecessary. For nvlink, a phb-index was associated
> to the npu when parsing hdat data, and the nvlink PHB was reusing the
> same value.
> 
> It turns out it helps to have the 'ibm,phb-index' property for
> opencapi PHBs after all. Otherwise it can lead to wrong results on
> platforms like mihawk when trying to match entries in the slot
> table. We end up with an opencapi device inheriting wrong properties
> in the device tree, because match_slot_phb_entry() defaults to
> phb-index 0 if it cannot find the property. Though it doesn't seem to
> cause any harm, it's wrong and a future patch is expected to start
> using the slot table for opencapi, so it needs fixing.
> 
> The twist is that with opencapi, we can have multiple virtual PHBs for
> a single NPU, at least on P9. Therefore there's no 1-to-1 mapping
> between the NPU and PHB index. It doesn't really make sense to
> associate a phb-index to a npu.
> 
> With this patch, opencapi PHBs created under a NPU use a fixed mapping
> for their phb-index, based on the brick index. The range of possible
> values is 7 to 12. Because there can only be one nvlink PHB per NPU,
> it is always using a phb-index of 7.
> 
> A side effect is that 2 virtual PHBs on 2 different chips can have the
> same phb-index, which is similar to what happens for 'real' PCI PHBs,
> but is different from what was happening on a nvlink-only witherspoon
> so far.
> 
> This doesn't really affect Axone, where we have a 1-to-1 mapping
> between NPU and PHB for nvlink (and it will remain true with
> opencapi), but to be consistent, we define the phb-index when creating
> the PHB like on P9.
> 
> Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>

Maybe include a comment explaining this in the code?

Otherwise

Reviewed-by: Andrew Donnellan <ajd@linux.ibm.com>

> ---
> 
> Reza, Alistair: see the blurb above about the phb-index of the nvlink
> PHBs changing from 7 and 8 on witherspoon, to 7 and 7. My
> understanding is that 1) it's not really used on witherspoon and 2)
> when it is used, the phb-index is associated to the chip ID, as it's
> needed for real PHBs anyway, but I thought I should highlight it.
> 
> 
>   hw/npu2-common.c   | 1 -
>   hw/npu2-opencapi.c | 2 ++
>   hw/npu2.c          | 2 +-
>   hw/npu3-nvlink.c   | 2 +-
>   include/npu2.h     | 2 +-
>   include/npu3.h     | 2 ++
>   6 files changed, 7 insertions(+), 4 deletions(-)
> 
> diff --git a/hw/npu2-common.c b/hw/npu2-common.c
> index 6d5c35af..7326e165 100644
> --- a/hw/npu2-common.c
> +++ b/hw/npu2-common.c
> @@ -530,7 +530,6 @@ static struct npu2 *setup_npu(struct dt_node *dn)
>   	npu->index = dt_prop_get_u32(dn, "ibm,npu-index");
>   	npu->chip_id = gcid;
>   	npu->xscom_base = dt_get_address(dn, 0, NULL);
> -	npu->phb_index = dt_prop_get_u32(dn, "ibm,phb-index");
>   
>   	init_lock(&npu->i2c_lock);
>   	npu->i2c_pin_mode = ~0; // input mode by default
> diff --git a/hw/npu2-opencapi.c b/hw/npu2-opencapi.c
> index 9a391bb0..106ce0ac 100644
> --- a/hw/npu2-opencapi.c
> +++ b/hw/npu2-opencapi.c
> @@ -1665,6 +1665,8 @@ static void setup_device(struct npu2_dev *dev)
>   	dt_add_property_strings(dn_phb, "device_type", "pciex");
>   	dt_add_property(dn_phb, "reg", mm_win, sizeof(mm_win));
>   	dt_add_property_cells(dn_phb, "ibm,npu-index", dev->npu->index);
> +	dt_add_property_cells(dn_phb, "ibm,phb-index",
> +			      NPU2_PHB_INDEX_BASE + dev->brick_index);
>   	dt_add_property_cells(dn_phb, "ibm,chip-id", dev->npu->chip_id);
>   	dt_add_property_cells(dn_phb, "ibm,xscom-base", dev->npu->xscom_base);
>   	dt_add_property_cells(dn_phb, "ibm,npcq", dev->npu->dt_node->phandle);
> diff --git a/hw/npu2.c b/hw/npu2.c
> index 29c998f6..19ca70a4 100644
> --- a/hw/npu2.c
> +++ b/hw/npu2.c
> @@ -1478,7 +1478,7 @@ int npu2_nvlink_init_npu(struct npu2 *npu)
>   				"ibm,ioda2-npu2-phb");
>   	dt_add_property_strings(np, "device_type", "pciex");
>   	dt_add_property(np, "reg", reg, sizeof(reg));
> -	dt_add_property_cells(np, "ibm,phb-index", npu->phb_index);
> +	dt_add_property_cells(np, "ibm,phb-index", NPU2_PHB_INDEX_BASE);
>   	dt_add_property_cells(np, "ibm,npu-index", npu->index);
>   	dt_add_property_cells(np, "ibm,chip-id", npu->chip_id);
>   	dt_add_property_cells(np, "ibm,xscom-base", npu->xscom_base);
> diff --git a/hw/npu3-nvlink.c b/hw/npu3-nvlink.c
> index 7e7a10e8..a4e633e6 100644
> --- a/hw/npu3-nvlink.c
> +++ b/hw/npu3-nvlink.c
> @@ -1461,7 +1461,7 @@ static void npu3_dt_add_props(struct npu3 *npu)
>   				"ibm,ioda2-npu2-phb");
>   
>   	dt_add_property_cells(dn, "ibm,phb-index",
> -			      dt_prop_get_u32(npu->dt_node, "ibm,phb-index"));
> +			      NPU3_PHB_INDEX_BASE + npu->index);
>   	dt_add_property_cells(dn, "ibm,phb-diag-data-size", 0);
>   	dt_add_property_cells(dn, "ibm,opal-num-pes", NPU3_MAX_PE_NUM);
>   	dt_add_property_cells(dn, "ibm,opal-reserved-pe", NPU3_RESERVED_PE_NUM);
> diff --git a/include/npu2.h b/include/npu2.h
> index 92b58988..47317c8c 100644
> --- a/include/npu2.h
> +++ b/include/npu2.h
> @@ -53,6 +53,7 @@
>   #define NPU2_OCAPI_PE(ndev) ((ndev)->brick_index + NPU2_MAX_PE_NUM)
>   
>   #define NPU2_LINKS_PER_CHIP 6
> +#define NPU2_PHB_INDEX_BASE 7 // enough to avoid conflicts with PCI
>   
>   /* Link flags */
>   #define NPU2_DEV_PCI_LINKED	0x1
> @@ -175,7 +176,6 @@ struct npu2 {
>   
>   	/* NVLink */
>   	struct phb	phb_nvlink;
> -	uint32_t	phb_index;
>   
>   	/* OCAPI */
>   	uint64_t	i2c_port_id_ocapi;
> diff --git a/include/npu3.h b/include/npu3.h
> index 1c657f94..417756c4 100644
> --- a/include/npu3.h
> +++ b/include/npu3.h
> @@ -78,6 +78,8 @@ struct npu3_dev {
>   	struct npu3_dev_nvlink	nvlink;
>   };
>   
> +#define NPU3_PHB_INDEX_BASE 7 // enough to avoid conflicts with PCI
> +
>   struct npu3_nvlink {
>   	struct phb		phb;
>   	uint32_t		ctx_ref[NPU3_XTS_BDF_MAP_MAX];
>
Oliver O'Halloran Aug. 2, 2019, 8:10 a.m. UTC | #2
On Thu, 2019-08-01 at 10:54 +0200, Frederic Barrat wrote:
> Until now, opencapi PHBs were not using the 'ibm,phb-index' property,
> as it was thought unnecessary. For nvlink, a phb-index was associated
> to the npu when parsing hdat data, and the nvlink PHB was reusing the
> same value.
> 
> It turns out it helps to have the 'ibm,phb-index' property for
> opencapi PHBs after all. Otherwise it can lead to wrong results on
> platforms like mihawk when trying to match entries in the slot
> table. We end up with an opencapi device inheriting wrong properties
> in the device tree, because match_slot_phb_entry() defaults to
> phb-index 0 if it cannot find the property. Though it doesn't seem to
> cause any harm, it's wrong and a future patch is expected to start
> using the slot table for opencapi, so it needs fixing.

Yeah, that sounds broken.

> The twist is that with opencapi, we can have multiple virtual PHBs for
> a single NPU, at least on P9. Therefore there's no 1-to-1 mapping
> between the NPU and PHB index. It doesn't really make sense to
> associate a phb-index to a npu.

This isn't all that different to how the actual PHBs work. The PCI
equivalent of an NPU would be a PEC (PowerBus<->PHB glue) and PECs can
support multiple PHBs behind them. For P9 the structure is:

PEC0:
	PHB0
PEC1:
	PHB1
	PHB2
PEC3:
	PHB3
	PHB4
	PHB5

We use the phb-index for normal PHBs to calculate what the OPAL PHB ID
(linux domain number) should be for that PHB. The virtual PHBs will
register the first number available rather than claiming a fixed number
based on the phb-index and I think this is a problem.


Currently we use fixed numbers for real PHBs based on this:

> #define PHB4_PER_CHIP 6 /* Max 6 PHBs per chip on P9 */
> 
> static inline int phb4_get_opal_id(unsigned int chip_id, unsigned int index)
> {
>         return chip_id * PHB4_PER_CHIP + index;
> }

and:

> pci_register_phb(&p->phb, phb4_get_opal_id(p->chip_id, p->index));

While the virtual PHBs use:

pci_register_phb(&npu->phb_nvlink, OPAL_DYNAMIC_PHB_ID);

So the virtual PHBs take the first PHB ID that's free. On most systems
we use chip-as-node mode so the 2nd chip has chip ID 8 and the virtual
PHBs use PHB IDs 7 or 8.

Now, if we want to use fixed phb-index values for the virtual PHBs then
I'm all for it, but right now we don't have room to add a set of
virtual PHBs to each chip without changing the PCI domain number for
all the real PHBs. Changing the domain number also changes the systemd
persistent device naming of network adapters so I don't think we can
change any of this for currently GAed platforms, but it's something I'd
like to sort out for Axone.

If we're going to have a fixed phb-index for the virtual PHBs too, I
think you need to work out how that's going to map to a fixed
ibm,opal-phbid too.
Frederic Barrat Aug. 2, 2019, 1:37 p.m. UTC | #3
Le 02/08/2019 à 10:10, Oliver O'Halloran a écrit :
> On Thu, 2019-08-01 at 10:54 +0200, Frederic Barrat wrote:
>> Until now, opencapi PHBs were not using the 'ibm,phb-index' property,
>> as it was thought unnecessary. For nvlink, a phb-index was associated
>> to the npu when parsing hdat data, and the nvlink PHB was reusing the
>> same value.
>>
>> It turns out it helps to have the 'ibm,phb-index' property for
>> opencapi PHBs after all. Otherwise it can lead to wrong results on
>> platforms like mihawk when trying to match entries in the slot
>> table. We end up with an opencapi device inheriting wrong properties
>> in the device tree, because match_slot_phb_entry() defaults to
>> phb-index 0 if it cannot find the property. Though it doesn't seem to
>> cause any harm, it's wrong and a future patch is expected to start
>> using the slot table for opencapi, so it needs fixing.
> 
> Yeah, that sounds broken.
> 
>> The twist is that with opencapi, we can have multiple virtual PHBs for
>> a single NPU, at least on P9. Therefore there's no 1-to-1 mapping
>> between the NPU and PHB index. It doesn't really make sense to
>> associate a phb-index to a npu.
> 
> This isn't all that different to how the actual PHBs work. The PCI
> equivalent of an NPU would be a PEC (PowerBus<->PHB glue) and PECs can
> support multiple PHBs behind them. For P9 the structure is:
> 
> PEC0:
> 	PHB0
> PEC1:
> 	PHB1
> 	PHB2
> PEC3:
> 	PHB3
> 	PHB4
> 	PHB5


Yes, what we do for opencapi deviates from nvlink (1 PHB per link as 
opposed to 1 per NPU). That's unfortunate and confusing but 1) we had to 
because of hardware considerations related to BAR assignments and 2) the 
opencapi layout is similar to real PCI as you describe above, so we 
didn't feel too bad about it.


> We use the phb-index for normal PHBs to calculate what the OPAL PHB ID
> (linux domain number) should be for that PHB. The virtual PHBs will
> register the first number available rather than claiming a fixed number
> based on the phb-index and I think this is a problem.
> 
> 
> Currently we use fixed numbers for real PHBs based on this:
> 
>> #define PHB4_PER_CHIP 6 /* Max 6 PHBs per chip on P9 */
>>
>> static inline int phb4_get_opal_id(unsigned int chip_id, unsigned int index)
>> {
>>          return chip_id * PHB4_PER_CHIP + index;
>> }
> 
> and:
> 
>> pci_register_phb(&p->phb, phb4_get_opal_id(p->chip_id, p->index));
> 
> While the virtual PHBs use:
> 
> pci_register_phb(&npu->phb_nvlink, OPAL_DYNAMIC_PHB_ID);
> 
> So the virtual PHBs take the first PHB ID that's free. On most systems
> we use chip-as-node mode so the 2nd chip has chip ID 8 and the virtual
> PHBs use PHB IDs 7 or 8.
> 
> Now, if we want to use fixed phb-index values for the virtual PHBs then
> I'm all for it, but right now we don't have room to add a set of
> virtual PHBs to each chip without changing the PCI domain number for
> all the real PHBs. Changing the domain number also changes the systemd
> persistent device naming of network adapters so I don't think we can
> change any of this for currently GAed platforms, but it's something I'd
> like to sort out for Axone.
> 
> If we're going to have a fixed phb-index for the virtual PHBs too, I
> think you need to work out how that's going to map to a fixed
> ibm,opal-phbid too.


I agree we should shoot for fixed phb-index and opal-phbid, as it makes 
things simpler. It's also a bit troubling to see those virtual PHBs with 
low opal IDs, when they are related to devices plugged to the 2nd chip.

For Axone, it seems doable: 6 PHBs + 3 NPUs per chip (I'm not aware of 
PCI changes, if any, for Axone. Is it still 3 PECs and 6 PHBs per 
chip?). In the recently merged npu3 patches, we have a 1-to-1 mapping 
between NPU and PHB for nvlink, and the upcoming opencapi patches will 
keep it that way.

For P9, it becomes ugly really fast if we want to preserve compatibility 
with current phb/domain IDs seen by userland (and we should). At the 
same time, while desirable, I don't think we really *have to* have fixed 
phbid either. On a given platform, the dynamic IDs end up always being 
the same anyway.
So if I do the right thing for Axone, but keep things as is (with this 
patch) on P9, would that fly?

   Fred
Oliver O'Halloran Aug. 5, 2019, 11:13 a.m. UTC | #4
On Fri, Aug 2, 2019 at 11:37 PM Frederic Barrat <fbarrat@linux.ibm.com> wrote:
>
> Le 02/08/2019 à 10:10, Oliver O'Halloran a écrit :
> > On Thu, 2019-08-01 at 10:54 +0200, Frederic Barrat wrote:
>
> Yes, what we do for opencapi deviates from nvlink (1 PHB per link as
> opposed to 1 per NPU). That's unfortunate and confusing but 1) we had to
> because of hardware considerations related to BAR assignments and 2) the
> opencapi layout is similar to real PCI as you describe above, so we
> didn't feel too bad about it.

Fair enough. The only situation I can see where having separate PHBs
might be an issue is if we wanted to implement fence recovery. For
real PHBs we can unfence them individually even when they share a PEC,
but I dunno about the NPU. Hopefully we'll never need to do that
either.

> I agree we should shoot for fixed phb-index and opal-phbid, as it makes
> things simpler. It's also a bit troubling to see those virtual PHBs with
> low opal IDs, when they are related to devices plugged to the 2nd chip.
>
> For Axone, it seems doable: 6 PHBs + 3 NPUs per chip (I'm not aware of
> PCI changes, if any, for Axone. Is it still 3 PECs and 6 PHBs per
> chip?).

Yeah, looks like there's been no changes.

> In the recently merged npu3 patches, we have a 1-to-1 mapping
> between NPU and PHB for nvlink, and the upcoming opencapi patches will
> keep it that way.
>
> For P9, it becomes ugly really fast if we want to preserve compatibility
> with current phb/domain IDs seen by userland (and we should). At the
> same time, while desirable, I don't think we really *have to* have fixed
> phbid either. On a given platform, the dynamic IDs end up always being
> the same anyway.
>
> So if I do the right thing for Axone, but keep things as is (with this
> patch) on P9, would that fly?

That's fine. I figured we'd have to keep the existing behaviour for
already released systems. We could make the new numbering the default
for Axone and onwards and for unreleased P9 systems.

Anyway I was thinking we should use: <chip_id> * 0x10 as the base PHB
index for each chip so we can support 16 PHBs per chip. Would that be
enough?
Frederic Barrat Aug. 5, 2019, 5:26 p.m. UTC | #5
Le 05/08/2019 à 13:13, Oliver O'Halloran a écrit :
> On Fri, Aug 2, 2019 at 11:37 PM Frederic Barrat <fbarrat@linux.ibm.com> wrote:
>>
>> Le 02/08/2019 à 10:10, Oliver O'Halloran a écrit :
>>> On Thu, 2019-08-01 at 10:54 +0200, Frederic Barrat wrote:
>>
>> Yes, what we do for opencapi deviates from nvlink (1 PHB per link as
>> opposed to 1 per NPU). That's unfortunate and confusing but 1) we had to
>> because of hardware considerations related to BAR assignments and 2) the
>> opencapi layout is similar to real PCI as you describe above, so we
>> didn't feel too bad about it.
> 
> Fair enough. The only situation I can see where having separate PHBs
> might be an issue is if we wanted to implement fence recovery. For
> real PHBs we can unfence them individually even when they share a PEC,
> but I dunno about the NPU. Hopefully we'll never need to do that
> either.


For the NPU, we can fence/unfence on a per-link (i.e brick) basis.


>> I agree we should shoot for fixed phb-index and opal-phbid, as it makes
>> things simpler. It's also a bit troubling to see those virtual PHBs with
>> low opal IDs, when they are related to devices plugged to the 2nd chip.
>>
>> For Axone, it seems doable: 6 PHBs + 3 NPUs per chip (I'm not aware of
>> PCI changes, if any, for Axone. Is it still 3 PECs and 6 PHBs per
>> chip?).
> 
> Yeah, looks like there's been no changes.
> 
>> In the recently merged npu3 patches, we have a 1-to-1 mapping
>> between NPU and PHB for nvlink, and the upcoming opencapi patches will
>> keep it that way.
>>
>> For P9, it becomes ugly really fast if we want to preserve compatibility
>> with current phb/domain IDs seen by userland (and we should). At the
>> same time, while desirable, I don't think we really *have to* have fixed
>> phbid either. On a given platform, the dynamic IDs end up always being
>> the same anyway.
>>
>> So if I do the right thing for Axone, but keep things as is (with this
>> patch) on P9, would that fly?
> 
> That's fine. I figured we'd have to keep the existing behaviour for
> already released systems. We could make the new numbering the default
> for Axone and onwards and for unreleased P9 systems.


Well, I'm not too keen on changing it on P9/npu2 just for unreleased 
systems. I'd favor keeping it simple and all P9 systems the same. If it 
has to work for the already GA'd systems, it will work for the newer 
ones too.


> Anyway I was thinking we should use: <chip_id> * 0x10 as the base PHB
> index for each chip so we can support 16 PHBs per chip. Would that be
> enough?


16 PHBs would cover axone easily. What comes next is still a bit fuzzy, 
at least to me, but I guess we can always revisit then.
Is the chip ID numbering the same on axone? Does it mean the 2nd chip 
would see PHBs starting at 0x80?

   Fred

Patch

diff --git a/hw/npu2-common.c b/hw/npu2-common.c
index 6d5c35af..7326e165 100644
--- a/hw/npu2-common.c
+++ b/hw/npu2-common.c
@@ -530,7 +530,6 @@  static struct npu2 *setup_npu(struct dt_node *dn)
 	npu->index = dt_prop_get_u32(dn, "ibm,npu-index");
 	npu->chip_id = gcid;
 	npu->xscom_base = dt_get_address(dn, 0, NULL);
-	npu->phb_index = dt_prop_get_u32(dn, "ibm,phb-index");
 
 	init_lock(&npu->i2c_lock);
 	npu->i2c_pin_mode = ~0; // input mode by default
diff --git a/hw/npu2-opencapi.c b/hw/npu2-opencapi.c
index 9a391bb0..106ce0ac 100644
--- a/hw/npu2-opencapi.c
+++ b/hw/npu2-opencapi.c
@@ -1665,6 +1665,8 @@  static void setup_device(struct npu2_dev *dev)
 	dt_add_property_strings(dn_phb, "device_type", "pciex");
 	dt_add_property(dn_phb, "reg", mm_win, sizeof(mm_win));
 	dt_add_property_cells(dn_phb, "ibm,npu-index", dev->npu->index);
+	dt_add_property_cells(dn_phb, "ibm,phb-index",
+			      NPU2_PHB_INDEX_BASE + dev->brick_index);
 	dt_add_property_cells(dn_phb, "ibm,chip-id", dev->npu->chip_id);
 	dt_add_property_cells(dn_phb, "ibm,xscom-base", dev->npu->xscom_base);
 	dt_add_property_cells(dn_phb, "ibm,npcq", dev->npu->dt_node->phandle);
diff --git a/hw/npu2.c b/hw/npu2.c
index 29c998f6..19ca70a4 100644
--- a/hw/npu2.c
+++ b/hw/npu2.c
@@ -1478,7 +1478,7 @@  int npu2_nvlink_init_npu(struct npu2 *npu)
 				"ibm,ioda2-npu2-phb");
 	dt_add_property_strings(np, "device_type", "pciex");
 	dt_add_property(np, "reg", reg, sizeof(reg));
-	dt_add_property_cells(np, "ibm,phb-index", npu->phb_index);
+	dt_add_property_cells(np, "ibm,phb-index", NPU2_PHB_INDEX_BASE);
 	dt_add_property_cells(np, "ibm,npu-index", npu->index);
 	dt_add_property_cells(np, "ibm,chip-id", npu->chip_id);
 	dt_add_property_cells(np, "ibm,xscom-base", npu->xscom_base);
diff --git a/hw/npu3-nvlink.c b/hw/npu3-nvlink.c
index 7e7a10e8..a4e633e6 100644
--- a/hw/npu3-nvlink.c
+++ b/hw/npu3-nvlink.c
@@ -1461,7 +1461,7 @@  static void npu3_dt_add_props(struct npu3 *npu)
 				"ibm,ioda2-npu2-phb");
 
 	dt_add_property_cells(dn, "ibm,phb-index",
-			      dt_prop_get_u32(npu->dt_node, "ibm,phb-index"));
+			      NPU3_PHB_INDEX_BASE + npu->index);
 	dt_add_property_cells(dn, "ibm,phb-diag-data-size", 0);
 	dt_add_property_cells(dn, "ibm,opal-num-pes", NPU3_MAX_PE_NUM);
 	dt_add_property_cells(dn, "ibm,opal-reserved-pe", NPU3_RESERVED_PE_NUM);
diff --git a/include/npu2.h b/include/npu2.h
index 92b58988..47317c8c 100644
--- a/include/npu2.h
+++ b/include/npu2.h
@@ -53,6 +53,7 @@ 
 #define NPU2_OCAPI_PE(ndev) ((ndev)->brick_index + NPU2_MAX_PE_NUM)
 
 #define NPU2_LINKS_PER_CHIP 6
+#define NPU2_PHB_INDEX_BASE 7 // enough to avoid conflicts with PCI
 
 /* Link flags */
 #define NPU2_DEV_PCI_LINKED	0x1
@@ -175,7 +176,6 @@  struct npu2 {
 
 	/* NVLink */
 	struct phb	phb_nvlink;
-	uint32_t	phb_index;
 
 	/* OCAPI */
 	uint64_t	i2c_port_id_ocapi;
diff --git a/include/npu3.h b/include/npu3.h
index 1c657f94..417756c4 100644
--- a/include/npu3.h
+++ b/include/npu3.h
@@ -78,6 +78,8 @@  struct npu3_dev {
 	struct npu3_dev_nvlink	nvlink;
 };
 
+#define NPU3_PHB_INDEX_BASE 7 // enough to avoid conflicts with PCI
+
 struct npu3_nvlink {
 	struct phb		phb;
 	uint32_t		ctx_ref[NPU3_XTS_BDF_MAP_MAX];