[net,V2,05/15] net/mlx5: Add retry mechanism to the command entry index allocation

Message ID 20201001195247.66636-6-saeed@kernel.org
State Changes Requested
Delegated to: David Miller
Series [net,V2,01/15] net/mlx5: Don't allow health work when device is uninitialized

Commit Message

Saeed Mahameed Oct. 1, 2020, 7:52 p.m. UTC
From: Eran Ben Elisha <eranbe@nvidia.com>

It is possible that the allocation of a new command entry index will
temporarily fail. The new command already holds the semaphore, which means
that a free entry should become available soon. Add a one-second retry
mechanism before returning an error.

Patch "net/mlx5: Avoid possible free of command entry while timeout comp
handler" increases the likelihood of hitting this temporary failure, as it
delays the entry index release for non-callback commands.

Fixes: e126ba97dba9 ("mlx5: Add driver for Mellanox Connect-IB adapters")
Signed-off-by: Eran Ben Elisha <eranbe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/cmd.c | 21 ++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)
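
For context, the retry added below is built on the kernel's standard
jiffies deadline idiom. A minimal standalone sketch of that pattern
follows; try_alloc() is a hypothetical stand-in for any allocator that
can fail transiently, not a function from this driver:

	#include <linux/jiffies.h>	/* jiffies, msecs_to_jiffies, time_before */
	#include <linux/sched.h>	/* cond_resched */

	static int alloc_with_deadline(unsigned int timeout_ms)
	{
		unsigned long deadline = jiffies + msecs_to_jiffies(timeout_ms);
		int ret;

		do {
			ret = try_alloc();	/* hypothetical; returns -errno on failure */
			if (ret >= 0)
				return ret;
			cond_resched();		/* yield the CPU; requires process context */
		} while (time_before(jiffies, deadline));	/* wraparound-safe compare */

		return ret;	/* last error once the deadline has passed */
	}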

Comments

Jakub Kicinski Oct. 1, 2020, 11:23 p.m. UTC | #1
On Thu,  1 Oct 2020 12:52:37 -0700 saeed@kernel.org wrote:
> +static int cmd_alloc_index_retry(struct mlx5_cmd *cmd)
> +{
> +	unsigned long alloc_end = jiffies + msecs_to_jiffies(1000);
> +	int idx;
> +
> +retry:
> +	idx = cmd_alloc_index(cmd);
> +	if (idx < 0 && time_before(jiffies, alloc_end)) {
> +		/* Index allocation can fail on heavy load of commands. This is a temporary
> +		 * situation as the current command already holds the semaphore, meaning that
> +		 * another command completion is being handled and it is expected to release
> +		 * the entry index soon.
> +		 */
> +		cond_resched();
> +		goto retry;
> +	}
> +	return idx;
> +}

This looks excessive. At least add some cpu_relax(), or udelay()?
Saeed Mahameed Oct. 2, 2020, 5:03 p.m. UTC | #2
On Thu, 2020-10-01 at 16:23 -0700, Jakub Kicinski wrote:
> On Thu,  1 Oct 2020 12:52:37 -0700 saeed@kernel.org wrote:
> > +static int cmd_alloc_index_retry(struct mlx5_cmd *cmd)
> > +{
> > +	unsigned long alloc_end = jiffies + msecs_to_jiffies(1000);
> > +	int idx;
> > +
> > +retry:
> > +	idx = cmd_alloc_index(cmd);
> > +	if (idx < 0 && time_before(jiffies, alloc_end)) {
> > +		/* Index allocation can fail on heavy load of commands. This is a temporary
> > +		 * situation as the current command already holds the semaphore, meaning that
> > +		 * another command completion is being handled and it is expected to release
> > +		 * the entry index soon.
> > +		 */
> > +		cond_resched();
> > +		goto retry;
> > +	}
> > +	return idx;
> > +}
> 
> This looks excessive. At least add some cpu_relax(), or udelay()?

cpu_relax() should also work fine, it is just that we have 100%
certainty that the allocation will succeed real soon.
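
For illustration only, a variant along the lines Jakub suggests might
look as follows; this is a sketch of the alternative discussed in the
thread, not code from this series:

	static int cmd_alloc_index_retry(struct mlx5_cmd *cmd)
	{
		unsigned long alloc_end = jiffies + msecs_to_jiffies(1000);
		int idx;

	retry:
		idx = cmd_alloc_index(cmd);
		if (idx < 0 && time_before(jiffies, alloc_end)) {
			cpu_relax();	/* per review: ease the busy loop */
			cond_resched();	/* and still yield to other tasks */
			goto retry;
		}
		return idx;
	}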

Patch

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
index 65ae6ef2039e..4b54c9241fd7 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
@@ -883,6 +883,25 @@  static bool opcode_allowed(struct mlx5_cmd *cmd, u16 opcode)
 	return cmd->allowed_opcode == opcode;
 }
 
+static int cmd_alloc_index_retry(struct mlx5_cmd *cmd)
+{
+	unsigned long alloc_end = jiffies + msecs_to_jiffies(1000);
+	int idx;
+
+retry:
+	idx = cmd_alloc_index(cmd);
+	if (idx < 0 && time_before(jiffies, alloc_end)) {
+		/* Index allocation can fail on heavy load of commands. This is a temporary
+		 * situation as the current command already holds the semaphore, meaning that
+		 * another command completion is being handled and it is expected to release
+		 * the entry index soon.
+		 */
+		cond_resched();
+		goto retry;
+	}
+	return idx;
+}
+
 static void cmd_work_handler(struct work_struct *work)
 {
 	struct mlx5_cmd_work_ent *ent = container_of(work, struct mlx5_cmd_work_ent, work);
@@ -900,7 +919,7 @@  static void cmd_work_handler(struct work_struct *work)
 	sem = ent->page_queue ? &cmd->pages_sem : &cmd->sem;
 	down(sem);
 	if (!ent->page_queue) {
-		alloc_ret = cmd_alloc_index(cmd);
+		alloc_ret = cmd_alloc_index_retry(cmd);
 		if (alloc_ret < 0) {
 			mlx5_core_err_rl(dev, "failed to allocate command entry\n");
 			if (ent->callback) {
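
For context, cmd_alloc_index() (called above but not part of this diff)
hands out a slot from a bitmask of free command entries under a
spinlock. The sketch below is reconstructed from the driver around this
era; details may differ from the exact tree this patch applies to:

	static int cmd_alloc_index(struct mlx5_cmd *cmd)
	{
		unsigned long flags;
		int ret;

		spin_lock_irqsave(&cmd->alloc_lock, flags);
		ret = find_first_bit(&cmd->bitmask, cmd->max_reg_cmds);
		if (ret < cmd->max_reg_cmds)		/* found a free slot */
			clear_bit(ret, &cmd->bitmask);	/* mark it busy */
		spin_unlock_irqrestore(&cmd->alloc_lock, flags);

		return ret < cmd->max_reg_cmds ? ret : -ENOMEM;
	}

When all max_reg_cmds slots are busy it fails with -ENOMEM, which is the
transient condition the new retry loop waits out until a completion
frees a slot.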