[iwl-net,3/3] ice: map XDP queues to vectors in ice_vsi_map_rings_to_vectors()

Message ID 20240515160246.5181-4-larysa.zaremba@intel.com
State Under Review
Delegated to: Anthony Nguyen
Series Fix AF_XDP problems after changing queue number

Commit Message

Larysa Zaremba May 15, 2024, 4:02 p.m. UTC
ice_pf_dcb_recfg() re-maps queues to vectors with
ice_vsi_map_rings_to_vectors(), which does not restore the previous
state for XDP queues. This leads to no AF_XDP traffic after rebuild.

Map XDP queues to vectors in ice_vsi_map_rings_to_vectors().
Also, move the code around, so XDP queues are mapped independently only
through .ndo_bpf().
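
In short, where the mapping happens after this change (as the diff
below arranges it):

	ice_vsi_map_rings_to_vectors()
	    -> ice_map_xdp_rings()	(covers the rebuild path)
	ice_prepare_xdp_rings()
	    -> ice_map_xdp_rings()	(.ndo_bpf() full config only)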

Fixes: 6624e780a577 ("ice: split ice_vsi_setup into smaller functions")
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com>
---
 drivers/net/ethernet/intel/ice/ice.h      |  1 +
 drivers/net/ethernet/intel/ice/ice_base.c |  3 +
 drivers/net/ethernet/intel/ice/ice_lib.c  | 14 ++--
 drivers/net/ethernet/intel/ice/ice_main.c | 96 ++++++++++++++---------
 4 files changed, 68 insertions(+), 46 deletions(-)

Comments

Simon Horman May 16, 2024, 8:27 a.m. UTC | #1
On Wed, May 15, 2024 at 06:02:16PM +0200, Larysa Zaremba wrote:
> ice_pf_dcb_recfg() re-maps queues to vectors with
> ice_vsi_map_rings_to_vectors(), which does not restore the previous
> state for XDP queues. This leads to no AF_XDP traffic after rebuild.
> 
> Map XDP queues to vectors in ice_vsi_map_rings_to_vectors().
> Also, move the code around, so XDP queues are mapped independently only
> through .ndo_bpf().

Hi Larysa,

I take it the last sentence refers to the placement of ice_map_xdp_rings()
in ice_prepare_xdp_rings() after rather than before the
(cfg_type == ICE_XDP_CFG_PART) condition.
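
That is, this ordering, quoting the diff below:

	if (cfg_type == ICE_XDP_CFG_PART)
		return 0;

	ice_map_xdp_rings(vsi);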

If so, I see that it is a small change. But I do wonder if it is separate
from fixing the issue described in the first paragraph. And thus would
be better as a separate patch.

Also, (I'm raising a separate issue :) breaking out logic into
ice_xdp_ring_from_qid() seems very nice.  But I wonder if this ought to be
part of a cleanup patch for 'iwl-next' rather than a fixes patch for 'iwl-net'.

OTOH, I do see that breaking out ice_map_xdp_rings() makes sense in the
context of this fix as the same logic is to be called in two places.
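
For reference, the two call sites after this patch, as seen in the diff
below:

	ice_vsi_map_rings_to_vectors()  /* ice_base.c, rebuild path */
	    -> ice_map_xdp_rings(vsi)

	ice_prepare_xdp_rings()         /* ice_main.c, .ndo_bpf() full config */
	    -> ice_map_xdp_rings(vsi)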

Splitting patches aside, the resulting code looks good to me.

...
Larysa Zaremba May 16, 2024, 11:43 a.m. UTC | #2
On Thu, May 16, 2024 at 09:27:13AM +0100, Simon Horman wrote:
> On Wed, May 15, 2024 at 06:02:16PM +0200, Larysa Zaremba wrote:
> > ice_pf_dcb_recfg() re-maps queues to vectors with
> > ice_vsi_map_rings_to_vectors(), which does not restore the previous
> > state for XDP queues. This leads to no AF_XDP traffic after rebuild.
> > 
> > Map XDP queues to vectors in ice_vsi_map_rings_to_vectors().
> > Also, move the code around, so XDP queues are mapped independently only
> > through .ndo_bpf().
> 
> Hi Larysa,
> 
> I take it the last sentence refers to the placement of ice_map_xdp_rings()
> in ice_prepare_xdp_rings() after rather than before the
> (cfg_type == ICE_XDP_CFG_PART) condition.
> 
> If so, I see that it is a small change. But I do wonder if it is separate
> from fixing the issue described in the first paragraph. And thus would
> be better as a separate patch.

This is not necessary for the fix to work, but I think it is integral
to making the change properly. I mean, before the change, in the
rebuild path, we map XDP rings to vectors only once, and after the
change we still do this only once; it is just that previously it
happened in ice_prepare_xdp_rings() and now it happens in
ice_vsi_map_rings_to_vectors().
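
If it helps to see that the distribution logic itself is unchanged,
here is a minimal userspace model of the DIV_ROUND_UP spread that
ice_map_xdp_rings() keeps; the sizes and the printout are illustrative
only, not the driver's code:

	#include <stdio.h>

	#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

	int main(void)
	{
		int num_xdp_txq = 10, num_q_vectors = 4, v_idx;
		int xdp_rings_rem = num_xdp_txq;

		for (v_idx = 0; v_idx < num_q_vectors; v_idx++) {
			/* spread remaining rings over remaining vectors */
			int per_v = DIV_ROUND_UP(xdp_rings_rem,
						 num_q_vectors - v_idx);
			int q_base = num_xdp_txq - xdp_rings_rem;
			int q_id;

			for (q_id = q_base; q_id < q_base + per_v; q_id++)
				printf("xdp_ring %d -> vector %d\n",
				       q_id, v_idx);
			xdp_rings_rem -= per_v;
		}
		return 0; /* 10 rings over 4 vectors print a 3/3/2/2 split */
	}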

> 
> Also, (I'm raising a separate issue :) breaking out logic into
> ice_xdp_ring_from_qid() seems very nice.  But I wonder if this ought to be
> part of a cleanup patch for 'iwl-next' rather than a fixes patch for 'iwl-net'.
>

I have broken this out into a separate function because two lines
exceeded 80 characters, which is not in line with our current style
for drivers. I do not think this small helper creates any additional
problems with applying this patch, and the change is small enough to
see that the logic stays the same.
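
For example, this line from the old in-line version (visible in the
removed hunk below) runs well past 80 columns at its original
indentation:

	vsi->rx_rings[i]->xdp_ring = vsi->xdp_rings[i % vsi->num_xdp_txq];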

> OTOH, I do see that breaking out ice_map_xdp_rings() makes sense in the
> context of this fix as the same logic is to be called in two places.
> 
> Splitting patches aside, the resulting code looks good to me.
> 
> ...
>
Simon Horman May 16, 2024, 11:59 a.m. UTC | #3
On Thu, May 16, 2024 at 01:43:18PM +0200, Larysa Zaremba wrote:
> On Thu, May 16, 2024 at 09:27:13AM +0100, Simon Horman wrote:
> > On Wed, May 15, 2024 at 06:02:16PM +0200, Larysa Zaremba wrote:
> > > ice_pf_dcb_recfg() re-maps queues to vectors with
> > > ice_vsi_map_rings_to_vectors(), which does not restore the previous
> > > state for XDP queues. This leads to no AF_XDP traffic after rebuild.
> > > 
> > > Map XDP queues to vectors in ice_vsi_map_rings_to_vectors().
> > > Also, move the code around, so XDP queues are mapped independently only
> > > through .ndo_bpf().
> > 
> > Hi Larysa,
> > 
> > I take it the last sentence refers to the placement of ice_map_xdp_rings()
> > in ice_prepare_xdp_rings() after rather than before the
> > (cfg_type == ICE_XDP_CFG_PART) condition.
> > 
> > If so, I see that it is a small change. But I do wonder if it is separate
> > from fixing the issue described in the first paragraph. And thus would
> > be better as a separate patch.
> 
> This is not necessary for the fix to work, but I think it is integral
> to making the change properly. I mean, before the change, in the
> rebuild path, we map XDP rings to vectors only once, and after the
> change we still do this only once; it is just that previously it
> happened in ice_prepare_xdp_rings() and now it happens in
> ice_vsi_map_rings_to_vectors().
> 
> > 
> > Also, (I'm raising a separate issue :) breaking out logic into
> > ice_xdp_ring_from_qid() seems very nice.  But I wonder if this ought to be
> > part of a cleanup patch for 'iwl-next' rather than a fixes patch for 'iwl-net'.
> >
> 
> I have broken this out into a separate function because two lines
> exceeded 80 characters, which is not in line with our current style
> for drivers. I do not think this small helper creates any additional
> problems with applying this patch, and the change is small enough to
> see that the logic stays the same.
> 
> > OTOH, I do see that breaking out ice_map_xdp_rings() makes sense in the
> > context of this fix as the same logic is to be called in two places.
> > 
> > Splitting patches aside, the resulting code looks good to me.
> > 
> > ...

Hi Larysa,

Thanks for your explanation, this all seems reasonable to me.

Reviewed-by: Simon Horman <horms@kernel.org>
Jacob Keller May 16, 2024, 5:12 p.m. UTC | #4
On 5/16/2024 4:59 AM, Simon Horman wrote:
> On Thu, May 16, 2024 at 01:43:18PM +0200, Larysa Zaremba wrote:
>> On Thu, May 16, 2024 at 09:27:13AM +0100, Simon Horman wrote:
>>> On Wed, May 15, 2024 at 06:02:16PM +0200, Larysa Zaremba wrote:
>>>> ice_pf_dcb_recfg() re-maps queues to vectors with
>>>> ice_vsi_map_rings_to_vectors(), which does not restore the previous
>>>> state for XDP queues. This leads to no AF_XDP traffic after rebuild.
>>>>
>>>> Map XDP queues to vectors in ice_vsi_map_rings_to_vectors().
>>>> Also, move the code around, so XDP queues are mapped independently only
>>>> through .ndo_bpf().
>>>
>>> Hi Larysa,
>>>
>>> I take it the last sentence refers to the placement of ice_map_xdp_rings()
>>> in ice_prepare_xdp_rings() after rather than before the
>>> (cfg_type == ICE_XDP_CFG_PART) condition.
>>>
>>> If so, I see that it is a small change. But I do wonder if it is separate
>>> from fixing the issue described in the first paragraph. And thus would
>>> be better as a separate patch.
>>
>> This is not necessary for the fix to work, but I think it is integral
>> to making the change properly. I mean, before the change, in the
>> rebuild path, we map XDP rings to vectors only once, and after the
>> change we still do this only once; it is just that previously it
>> happened in ice_prepare_xdp_rings() and now it happens in
>> ice_vsi_map_rings_to_vectors().
>>
>>>
>>> Also, (I'm raising a separate issue :) breaking out logic into
>>> ice_xdp_ring_from_qid() seems very nice.  But I wonder if this ought to be
>>> part of a cleanup patch for 'iwl-next' rather than a fixes patch for 'iwl-net'.
>>>
>>
>> I have broken this out into a separate function because two lines
>> exceeded 80 characters, which is not in line with our current style
>> for drivers. I do not think this small helper creates any additional
>> problems with applying this patch, and the change is small enough to
>> see that the logic stays the same.
>>
>>> OTOH, I do see that breaking out ice_map_xdp_rings() makes sense in the
>>> context of this fix as the same logic is to be called in two places.
>>>
>>> Splitting patches aside, the resulting code looks good to me.
>>>
>>> ...
> 
> Hi Larysa,
> 
> Thanks for your explanation, this all seems reasonable to me.
> 
> Reviewed-by: Simon Horman <horms@kernel.org>
> 

Makes sense to me. Given that it helps ensure we get the fix right, we
should send it together to net.

Thanks,
Jake

Patch

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index b91b2594b29d..da8c8afebc93 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -940,6 +940,7 @@ int ice_vsi_determine_xdp_res(struct ice_vsi *vsi);
 int ice_prepare_xdp_rings(struct ice_vsi *vsi, struct bpf_prog *prog,
 			  enum ice_xdp_cfg cfg_type);
 int ice_destroy_xdp_rings(struct ice_vsi *vsi, enum ice_xdp_cfg cfg_type);
+void ice_map_xdp_rings(struct ice_vsi *vsi);
 int
 ice_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
 	     u32 flags);
diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index 687f6cb2b917..5d396c1a7731 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -842,6 +842,9 @@ void ice_vsi_map_rings_to_vectors(struct ice_vsi *vsi)
 		}
 		rx_rings_rem -= rx_rings_per_v;
 	}
+
+	if (ice_is_xdp_ena_vsi(vsi))
+		ice_map_xdp_rings(vsi);
 }
 
 /**
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index dd8b374823ee..7629b0190578 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -2274,13 +2274,6 @@ static int ice_vsi_cfg_def(struct ice_vsi *vsi)
 		if (ret)
 			goto unroll_vector_base;
 
-		ice_vsi_map_rings_to_vectors(vsi);
-
-		/* Associate q_vector rings to napi */
-		ice_vsi_set_napi_queues(vsi);
-
-		vsi->stat_offsets_loaded = false;
-
 		if (ice_is_xdp_ena_vsi(vsi)) {
 			ret = ice_vsi_determine_xdp_res(vsi);
 			if (ret)
@@ -2291,6 +2284,13 @@ static int ice_vsi_cfg_def(struct ice_vsi *vsi)
 				goto unroll_vector_base;
 		}
 
+		ice_vsi_map_rings_to_vectors(vsi);
+
+		/* Associate q_vector rings to napi */
+		ice_vsi_set_napi_queues(vsi);
+
+		vsi->stat_offsets_loaded = false;
+
 		/* ICE_VSI_CTRL does not need RSS so skip RSS processing */
 		if (vsi->type != ICE_VSI_CTRL)
 			/* Do not exit if configuring RSS had an issue, at
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 2a270aacd24a..1b61ca3a6eb6 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -2707,6 +2707,60 @@ static void ice_vsi_assign_bpf_prog(struct ice_vsi *vsi, struct bpf_prog *prog)
 		bpf_prog_put(old_prog);
 }
 
+static struct ice_tx_ring *ice_xdp_ring_from_qid(struct ice_vsi *vsi, int qid)
+{
+	struct ice_q_vector *q_vector;
+	struct ice_tx_ring *ring;
+
+	if (static_key_enabled(&ice_xdp_locking_key))
+		return vsi->xdp_rings[qid % vsi->num_xdp_txq];
+
+	q_vector = vsi->rx_rings[qid]->q_vector;
+	ice_for_each_tx_ring(ring, q_vector->tx)
+		if (ice_ring_is_xdp(ring))
+			return ring;
+
+	return NULL;
+}
+
+/**
+ * ice_map_xdp_rings - Map XDP rings to interrupt vectors
+ * @vsi: the VSI with XDP rings being configured
+ *
+ * Map XDP rings to interrupt vectors and perform the configuration steps
+ * dependent on the mapping.
+ */
+void ice_map_xdp_rings(struct ice_vsi *vsi)
+{
+	int xdp_rings_rem = vsi->num_xdp_txq;
+	int v_idx, q_idx;
+
+	/* follow the logic from ice_vsi_map_rings_to_vectors */
+	ice_for_each_q_vector(vsi, v_idx) {
+		struct ice_q_vector *q_vector = vsi->q_vectors[v_idx];
+		int xdp_rings_per_v, q_id, q_base;
+
+		xdp_rings_per_v = DIV_ROUND_UP(xdp_rings_rem,
+					       vsi->num_q_vectors - v_idx);
+		q_base = vsi->num_xdp_txq - xdp_rings_rem;
+
+		for (q_id = q_base; q_id < (q_base + xdp_rings_per_v); q_id++) {
+			struct ice_tx_ring *xdp_ring = vsi->xdp_rings[q_id];
+
+			xdp_ring->q_vector = q_vector;
+			xdp_ring->next = q_vector->tx.tx_ring;
+			q_vector->tx.tx_ring = xdp_ring;
+		}
+		xdp_rings_rem -= xdp_rings_per_v;
+	}
+
+	ice_for_each_rxq(vsi, q_idx) {
+		vsi->rx_rings[q_idx]->xdp_ring = ice_xdp_ring_from_qid(vsi,
+								       q_idx);
+		ice_tx_xsk_pool(vsi, q_idx);
+	}
+}
+
 /**
  * ice_prepare_xdp_rings - Allocate, configure and setup Tx rings for XDP
  * @vsi: VSI to bring up Tx rings used by XDP
@@ -2719,7 +2773,6 @@ int ice_prepare_xdp_rings(struct ice_vsi *vsi, struct bpf_prog *prog,
 			  enum ice_xdp_cfg cfg_type)
 {
 	u16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
-	int xdp_rings_rem = vsi->num_xdp_txq;
 	struct ice_pf *pf = vsi->back;
 	struct ice_qs_cfg xdp_qs_cfg = {
 		.qs_mutex = &pf->avail_q_mutex,
@@ -2732,8 +2785,7 @@ int ice_prepare_xdp_rings(struct ice_vsi *vsi, struct bpf_prog *prog,
 		.mapping_mode = ICE_VSI_MAP_CONTIG
 	};
 	struct device *dev;
-	int i, v_idx;
-	int status;
+	int status, i;
 
 	dev = ice_pf_to_dev(pf);
 	vsi->xdp_rings = devm_kcalloc(dev, vsi->num_xdp_txq,
@@ -2752,42 +2804,6 @@ int ice_prepare_xdp_rings(struct ice_vsi *vsi, struct bpf_prog *prog,
 	if (ice_xdp_alloc_setup_rings(vsi))
 		goto clear_xdp_rings;
 
-	/* follow the logic from ice_vsi_map_rings_to_vectors */
-	ice_for_each_q_vector(vsi, v_idx) {
-		struct ice_q_vector *q_vector = vsi->q_vectors[v_idx];
-		int xdp_rings_per_v, q_id, q_base;
-
-		xdp_rings_per_v = DIV_ROUND_UP(xdp_rings_rem,
-					       vsi->num_q_vectors - v_idx);
-		q_base = vsi->num_xdp_txq - xdp_rings_rem;
-
-		for (q_id = q_base; q_id < (q_base + xdp_rings_per_v); q_id++) {
-			struct ice_tx_ring *xdp_ring = vsi->xdp_rings[q_id];
-
-			xdp_ring->q_vector = q_vector;
-			xdp_ring->next = q_vector->tx.tx_ring;
-			q_vector->tx.tx_ring = xdp_ring;
-		}
-		xdp_rings_rem -= xdp_rings_per_v;
-	}
-
-	ice_for_each_rxq(vsi, i) {
-		if (static_key_enabled(&ice_xdp_locking_key)) {
-			vsi->rx_rings[i]->xdp_ring = vsi->xdp_rings[i % vsi->num_xdp_txq];
-		} else {
-			struct ice_q_vector *q_vector = vsi->rx_rings[i]->q_vector;
-			struct ice_tx_ring *ring;
-
-			ice_for_each_tx_ring(ring, q_vector->tx) {
-				if (ice_ring_is_xdp(ring)) {
-					vsi->rx_rings[i]->xdp_ring = ring;
-					break;
-				}
-			}
-		}
-		ice_tx_xsk_pool(vsi, i);
-	}
-
 	/* omit the scheduler update if in reset path; XDP queues will be
 	 * taken into account at the end of ice_vsi_rebuild, where
 	 * ice_cfg_vsi_lan is being called
@@ -2795,6 +2811,8 @@ int ice_prepare_xdp_rings(struct ice_vsi *vsi, struct bpf_prog *prog,
 	if (cfg_type == ICE_XDP_CFG_PART)
 		return 0;
 
+	ice_map_xdp_rings(vsi);
+
 	/* tell the Tx scheduler that right now we have
 	 * additional queues
 	 */