[v2,5/6] PCI: hv: hv_pci_devices_present(): only queue a new work when necessary

Message ID: 20180305192134.32207-6-decui@microsoft.com
State: Superseded
Delegated to: Lorenzo Pieralisi
Series: some fixes to the pci-hyperv driver

Commit Message

Dexuan Cui March 5, 2018, 7:22 p.m. UTC
If there is already a pending work item, we just need to add
the new dr to the dr_list.

This was suggested by Michael Kelley.

Signed-off-by: Dexuan Cui <decui@microsoft.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Jack Morgenstein <jackm@mellanox.com>
Cc: stable@vger.kernel.org
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: K. Y. Srinivasan <kys@microsoft.com>
Cc: Michael Kelley (EOSG) <Michael.H.Kelley@microsoft.com>
---
 drivers/pci/host/pci-hyperv.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)
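
The change implements a common work-coalescing pattern: every producer
appends a dr to the list under the lock, but only the producer that
finds the list empty schedules the worker; a dr added while work is
still pending is picked up by the already-scheduled worker. Below is a
minimal, self-contained userspace analogue of the pattern (hypothetical
names; the driver itself uses hbus->device_list_lock, hbus->dr_list,
get_hvpcibus() and queue_work()), written with the unlock-before-queue
ordering suggested in the comments that follow:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct dr {
	struct dr *next;
	int payload;
};

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct dr *dr_list;	/* newest entry at the head, for brevity */

/* Consumer: take the whole batch under the lock, then process it. */
static void worker(void)
{
	struct dr *batch, *next;

	pthread_mutex_lock(&list_lock);
	batch = dr_list;
	dr_list = NULL;
	pthread_mutex_unlock(&list_lock);

	for (; batch; batch = next) {
		next = batch->next;
		printf("processing dr %d\n", batch->payload);
		free(batch);
	}
}

/*
 * Producer: append, and schedule the consumer only when the list was
 * empty, mirroring the check hv_pci_devices_present() makes.
 */
static void devices_present(int payload)
{
	struct dr *dr = malloc(sizeof(*dr));
	bool pending;

	if (!dr)
		return;
	dr->payload = payload;

	pthread_mutex_lock(&list_lock);
	pending = dr_list != NULL;
	dr->next = dr_list;
	dr_list = dr;
	pthread_mutex_unlock(&list_lock);

	if (!pending)
		worker();	/* stand-in for get_hvpcibus() + queue_work() */
}

int main(void)
{
	devices_present(1);
	devices_present(2);
	return 0;
}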

Comments

Michael Kelley (EOSG) March 5, 2018, 11:47 p.m. UTC | #1
> -----Original Message-----
> From: Dexuan Cui
> Sent: Monday, March 5, 2018 11:22 AM
> To: bhelgaas@google.com; linux-pci@vger.kernel.org; KY Srinivasan <kys@microsoft.com>;
> Stephen Hemminger <sthemmin@microsoft.com>; olaf@aepfle.de; apw@canonical.com;
> jasowang@redhat.com
> Cc: linux-kernel@vger.kernel.org; driverdev-devel@linuxdriverproject.org; Haiyang Zhang
> <haiyangz@microsoft.com>; vkuznets@redhat.com; marcelo.cerri@canonical.com; Michael
> Kelley (EOSG) <Michael.H.Kelley@microsoft.com>; Dexuan Cui <decui@microsoft.com>; Jack
> Morgenstein <jackm@mellanox.com>; stable@vger.kernel.org
> Subject: [PATCH v2 5/6] PCI: hv: hv_pci_devices_present(): only queue a new work when
> necessary
> 
> If there is already a pending work item, we just need to add
> the new dr to the dr_list.
> 
> This was suggested by Michael Kelley.
> 
> Signed-off-by: Dexuan Cui <decui@microsoft.com>
> Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
> Cc: Jack Morgenstein <jackm@mellanox.com>
> Cc: stable@vger.kernel.org
> Cc: Stephen Hemminger <sthemmin@microsoft.com>
> Cc: K. Y. Srinivasan <kys@microsoft.com>
> Cc: Michael Kelley (EOSG) <Michael.H.Kelley@microsoft.com>
> ---
>  drivers/pci/host/pci-hyperv.c | 19 ++++++++++++++++---
>  1 file changed, 16 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/pci/host/pci-hyperv.c b/drivers/pci/host/pci-hyperv.c
> index 3a385212f666..d3aa6736a9bb 100644
> --- a/drivers/pci/host/pci-hyperv.c
> +++ b/drivers/pci/host/pci-hyperv.c
> @@ -1733,6 +1733,7 @@ static void hv_pci_devices_present(struct hv_pcibus_device *hbus,
>  	struct hv_dr_state *dr;
>  	struct hv_dr_work *dr_wrk;
>  	unsigned long flags;
> +	bool pending_dr;
> 
>  	dr_wrk = kzalloc(sizeof(*dr_wrk), GFP_NOWAIT);
>  	if (!dr_wrk)
> @@ -1756,11 +1757,23 @@ static void hv_pci_devices_present(struct hv_pcibus_device
> *hbus,
>  	}
> 
>  	spin_lock_irqsave(&hbus->device_list_lock, flags);
> +
> +	/*
> +	 * If pending_dr is true, we have already queued a work
> +	 * item, which will see the new dr. Otherwise, we need
> +	 * to queue a new work item.
> +	 */
> +	pending_dr = !list_empty(&hbus->dr_list);
>  	list_add_tail(&dr->list_entry, &hbus->dr_list);
> -	spin_unlock_irqrestore(&hbus->device_list_lock, flags);

A minor point: the spin_unlock_irqrestore() call can stay here.
Once we have the list status in a local variable and the new
entry is added to the list, nothing bad can happen if we drop
the spin lock. At worst, and very unlikely, we'll queue work
when some other thread has already queued work to process the
list entry, but that's no big deal. I'd argue for keeping the
code covered by the spin lock as small as possible.

Michael

> 
> -	get_hvpcibus(hbus);
> -	queue_work(hbus->wq, &dr_wrk->wrk);
> +	if (pending_dr) {
> +		kfree(dr_wrk);
> +	} else {
> +		get_hvpcibus(hbus);
> +		queue_work(hbus->wq, &dr_wrk->wrk);
> +	}
> +
> +	spin_unlock_irqrestore(&hbus->device_list_lock, flags);
>  }
> 
>  /**
> --
> 2.7.4
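
Concretely, the reordering Michael suggests would leave the tail of
hv_pci_devices_present() looking roughly like this (a sketch derived
from the v2 hunk above, not necessarily the exact v3 code):

	spin_lock_irqsave(&hbus->device_list_lock, flags);

	/*
	 * If pending_dr is true, we have already queued a work
	 * item, which will see the new dr. Otherwise, we need
	 * to queue a new work item.
	 */
	pending_dr = !list_empty(&hbus->dr_list);
	list_add_tail(&dr->list_entry, &hbus->dr_list);
	spin_unlock_irqrestore(&hbus->device_list_lock, flags);

	if (pending_dr) {
		kfree(dr_wrk);
	} else {
		get_hvpcibus(hbus);
		queue_work(hbus->wq, &dr_wrk->wrk);
	}

The critical section now covers only the list check and the insertion;
the worst case Michael describes is an occasional extra work item,
which is harmless because a worker that finds dr_list already drained
has nothing to do.
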
Dexuan Cui March 6, 2018, 12:17 a.m. UTC | #2
> From: Michael Kelley (EOSG)
> Sent: Monday, March 5, 2018 15:48
> > @@ -1756,11 +1757,23 @@ static void hv_pci_devices_present(struct
> hv_pcibus_device
> > *hbus,
> >  	}
> >
> >  	spin_lock_irqsave(&hbus->device_list_lock, flags);
> > +
> > +	/*
> > +	 * If pending_dr is true, we have already queued a work
> > +	 * item, which will see the new dr. Otherwise, we need
> > +	 * to queue a new work item.
> > +	 */
> > +	pending_dr = !list_empty(&hbus->dr_list);
> >  	list_add_tail(&dr->list_entry, &hbus->dr_list);
> > -	spin_unlock_irqrestore(&hbus->device_list_lock, flags);
> 
> A minor point: the spin_unlock_irqrestore() call can stay here.
> Once we have the list status in a local variable and the new
> entry is added to the list, nothing bad can happen if we drop
> the spin lock. At worst, and very unlikely, we'll queue work
> when some other thread has already queued work to process the
> list entry, but that's no big deal. I'd argue for keeping the
> code covered by the spin lock as small as possible.
> 
> Michael

I agree. Will fix this in v3.

> >
> > -	get_hvpcibus(hbus);
> > -	queue_work(hbus->wq, &dr_wrk->wrk);
> > +	if (pending_dr) {
> > +		kfree(dr_wrk);
> > +	} else {
> > +		get_hvpcibus(hbus);
> > +		queue_work(hbus->wq, &dr_wrk->wrk);
> > +	}
> > +
> > +	spin_unlock_irqrestore(&hbus->device_list_lock, flags);
> >  }

To allow time for more comments from others, I'll hold off on v3 until tomorrow.

Thanks,
-- Dexuan

Patch

diff --git a/drivers/pci/host/pci-hyperv.c b/drivers/pci/host/pci-hyperv.c
index 3a385212f666..d3aa6736a9bb 100644
--- a/drivers/pci/host/pci-hyperv.c
+++ b/drivers/pci/host/pci-hyperv.c
@@ -1733,6 +1733,7 @@ static void hv_pci_devices_present(struct hv_pcibus_device *hbus,
 	struct hv_dr_state *dr;
 	struct hv_dr_work *dr_wrk;
 	unsigned long flags;
+	bool pending_dr;
 
 	dr_wrk = kzalloc(sizeof(*dr_wrk), GFP_NOWAIT);
 	if (!dr_wrk)
@@ -1756,11 +1757,23 @@ static void hv_pci_devices_present(struct hv_pcibus_device *hbus,
 	}
 
 	spin_lock_irqsave(&hbus->device_list_lock, flags);
+
+	/*
+	 * If pending_dr is true, we have already queued a work
+	 * item, which will see the new dr. Otherwise, we need
+	 * to queue a new work item.
+	 */
+	pending_dr = !list_empty(&hbus->dr_list);
 	list_add_tail(&dr->list_entry, &hbus->dr_list);
-	spin_unlock_irqrestore(&hbus->device_list_lock, flags);
 
-	get_hvpcibus(hbus);
-	queue_work(hbus->wq, &dr_wrk->wrk);
+	if (pending_dr) {
+		kfree(dr_wrk);
+	} else {
+		get_hvpcibus(hbus);
+		queue_work(hbus->wq, &dr_wrk->wrk);
+	}
+
+	spin_unlock_irqrestore(&hbus->device_list_lock, flags);
 }
 
 /**
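
For context on why coalescing the work items is safe: the work handler
(pci_devices_present_work() in this file) drains hbus->dr_list under
the same device_list_lock, discarding all but the most recent dr, so a
dr added while a work item is still pending is guaranteed to be seen.
Roughly (a simplified sketch of the consumer side, not a verbatim quote
of the driver):

	spin_lock_irqsave(&hbus->device_list_lock, flags);
	while (!list_empty(&hbus->dr_list)) {
		dr = list_first_entry(&hbus->dr_list, struct hv_dr_state,
				      list_entry);
		list_del(&dr->list_entry);

		/* Discard this dr if the list still has newer entries. */
		if (!list_empty(&hbus->dr_list)) {
			kfree(dr);
			continue;
		}
	}
	spin_unlock_irqrestore(&hbus->device_list_lock, flags);

	/* If dr is non-NULL here, it is the most recent state to act on. */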