
[RFC,06/12] xen-blkfront: add callbacks for PM suspend and hibernation

Message ID 20180612205619.28156-7-anchalag@amazon.com
State RFC, archived
Delegated to: David Miller
Series Enable PM hibernation on guest VMs

Commit Message

Anchal Agarwal June 12, 2018, 8:56 p.m. UTC
From: Munehisa Kamata <kamatam@amazon.com>

Add freeze and restore callbacks for PM suspend and hibernation support.
The freeze handler stops the block-layer queues and disconnects the
frontend from the backend while freeing ring_info and associated
resources. The restore handler re-allocates ring_info and re-connects to
the backend, so the rest of the kernel can continue to use the block
device transparently. Also, the handlers are used for both PM suspend and
hibernation so that the existing suspend/resume callbacks for Xen suspend
can be kept without modification.
If a backend doesn't have commit 12ea729645ac ("xen/blkback: unmap all
persistent grants when frontend gets disconnected"), the frontend may see
a massive amount of grant table warnings when freeing resources:

 [   36.852659] deferring g.e. 0xf9 (pfn 0xffffffffffffffff)
 [   36.855089] xen:grant_table: WARNING: g.e. 0x112 still in use!

In this case, persistent grants would need to be disabled.
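Disabling persistent grants from the frontend side essentially means not
advertising the feature to the backend during ring setup. As a rough
sketch (not part of this patch, assuming the existing xenstore transaction
in talk_to_blkback()):

  /* Advertise 0 instead of 1 so persistent grants are never negotiated,
   * regardless of what the backend supports. */
  err = xenbus_printf(xbt, dev->nodename, "feature-persistent", "%u", 0);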

Ensure no reqs/rsps in rings before disconnecting. When disconnecting
the frontend from the backend in blkfront_freeze(), there may still be
unconsumed requests or responses in the rings, especially when the
backend is backed by a network-based device. If the frontend gets
disconnected with such reqs/rsps remaining there, it can cause grant
warnings and/or loss of reqs/rsps when the pages are freed afterward.
This can leave the resumed kernel in an unrecoverable state, such as the
unexpected freeing of a granted page and/or a hung task due to the lost
reqs or rsps. Therefore we have to ensure that there are no unconsumed
requests or responses before disconnecting.

Actually, the frontend just needs to wait for some amount of time so that
the backend can process the requests, put the responses on the ring and
notify the frontend back. The timeout used here is based on a heuristic.
If we somehow hit the timeout, it means something serious has happened in
the backend; the frontend will just return an error to the PM core and PM
suspend/hibernation will be aborted. This may be something that should be
fixed on the backend side, but a frontend-side fix is probably still worth
doing to work with a broader set of backends.
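
In rough pseudo-form, the wait implemented by blkfront_freeze() below
polls each ring under its lock, allowing about 25ms per ring slot before
giving up (a simplified sketch of the code added by this patch):

  /* Simplified sketch of the per-ring drain wait in blkfront_freeze() */
  timeout = jiffies + msecs_to_jiffies(25 * RING_SIZE(ring));
  do {
          spin_lock_irq(&rinfo->ring_lock);
          busy = blkfront_ring_is_busy(ring); /* reqs or rsps still pending? */
          spin_unlock_irq(&rinfo->ring_lock);
          if (busy)
                  msleep(25);
  } while (busy && time_is_after_jiffies(timeout));
  if (busy)
          return -EBUSY; /* abort PM suspend/hibernation */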

Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
Reviewed-by: Munehisa Kamata <kamatam@amazon.com>
Reviewed-by: Eduardo Valentin <eduval@amazon.com>
---
 drivers/block/xen-blkfront.c | 158 +++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 151 insertions(+), 7 deletions(-)

Comments

Roger Pau Monné June 13, 2018, 8:24 a.m. UTC | #1
On Tue, Jun 12, 2018 at 08:56:13PM +0000, Anchal Agarwal wrote:
> From: Munehisa Kamata <kamatam@amazon.com>
> 
> Add freeze and restore callbacks for PM suspend and hibernation support.
> The freeze handler stops the block-layer queues and disconnects the
> frontend from the backend while freeing ring_info and associated
> resources. The restore handler re-allocates ring_info and re-connects to
> the backend, so the rest of the kernel can continue to use the block
> device transparently. Also, the handlers are used for both PM suspend and
> hibernation so that the existing suspend/resume callbacks for Xen suspend
> can be kept without modification.
> If a backend doesn't have commit 12ea729645ac ("xen/blkback: unmap all
> persistent grants when frontend gets disconnected"), the frontend may see
> a massive amount of grant table warnings when freeing resources:
> 
>  [   36.852659] deferring g.e. 0xf9 (pfn 0xffffffffffffffff)
>  [   36.855089] xen:grant_table: WARNING: g.e. 0x112 still in use!
> 
> In this case, persistent grants would need to be disabled.
> 
> Ensure no reqs/rsps in rings before disconnecting. When disconnecting
> the frontend from the backend in blkfront_freeze(), there may still be
> unconsumed requests or responses in the rings, especially when the
> backend is backed by a network-based device. If the frontend gets
> disconnected with such reqs/rsps remaining there, it can cause grant
> warnings and/or loss of reqs/rsps when the pages are freed afterward.

I'm not sure why having pending requests can cause grant warnings or
loss of requests. If handled properly this shouldn't be an issue.
Linux blkfront already does live migration (which also involves a
reconnection of the frontend) with pending requests and that doesn't
seem to be an issue.

> This can leave the resumed kernel in an unrecoverable state, such as the
> unexpected freeing of a granted page and/or a hung task due to the lost
> reqs or rsps. Therefore we have to ensure that there are no unconsumed
> requests or responses before disconnecting.

Given that we have multiqueue, plus multipage rings, I'm not sure
waiting for the requests on the rings to complete is a good idea.

Why can't you just disconnect the frontend and requeue all the
requests in flight? When the frontend connects on resume those
requests will be queued again.
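
For reference, the requeue path that already exists for the migration case
in blkif_recover() looks roughly like this (a sketch based on the current
driver, not part of this patch):

  /* Requests and bios saved across the reconnect are simply resubmitted
   * once the new rings are connected. */
  list_for_each_entry_safe(req, n, &info->requests, queuelist) {
          /* Requeue pending requests (flush or discard) */
          list_del_init(&req->queuelist);
          blk_mq_requeue_request(req, false);
  }
  blk_mq_start_stopped_hw_queues(info->rq, true);
  blk_mq_kick_requeue_list(info->rq);

  while ((bio = bio_list_pop(&info->bio_list)) != NULL)
          submit_bio(bio);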

Thanks, Roger.
Roger Pau Monné June 14, 2018, 8:43 a.m. UTC | #2
Please try to avoid top posting.

On Wed, Jun 13, 2018 at 10:20:48PM +0000, Anchal Agarwal wrote:
> Hi Roger,
> To answer your question: due to the lack of the mentioned commit
> (commit 12ea729645ac ("xen/blkback: unmap all persistent grants when
> frontend gets disconnected")) in older dom0 kernels (< 3.2), resume from

This fix that you mention is only present in kernels >= 3.18 AFAICT,
and persistent grants were introduced in 3.8 (0a8704a51f38), so
anything < 3.8 should work fine. Not sure why you mention 3.2 here.

> hibernation can fail on the guest side. In the absence of the commit,
> persistent grants are not unmapped immediately when the frontend is
> disconnected from the backend, which leaves the block device in an
> inconsistent state. To avoid this instability and work with a larger set
> of kernel versions, this approach has been used. Once there are no
> pending reqs/rsps, it is safer for the guest to resume from hibernation.

I think the fix should be backported (if it hasn't been done yet) to
kernels between 3.8 and 3.18. I don't like to add all this code just
to work around a Linux backend kernel bug.

AFAICT if persistent grants work as expected you could use almost the
same path that's used for migration, greatly reducing the amount of
code that you need to add.

Thanks, Roger.

Patch

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index ae00a82f350b..a223864c2220 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -46,6 +46,8 @@ 
 #include <linux/scatterlist.h>
 #include <linux/bitmap.h>
 #include <linux/list.h>
+#include <linux/completion.h>
+#include <linux/delay.h>
 
 #include <xen/xen.h>
 #include <xen/xenbus.h>
@@ -78,6 +80,8 @@  enum blkif_state {
 	BLKIF_STATE_DISCONNECTED,
 	BLKIF_STATE_CONNECTED,
 	BLKIF_STATE_SUSPENDED,
+	BLKIF_STATE_FREEZING,
+	BLKIF_STATE_FROZEN
 };
 
 struct grant {
@@ -216,6 +220,7 @@  struct blkfront_info
 	/* Save uncomplete reqs and bios for migration. */
 	struct list_head requests;
 	struct bio_list bio_list;
+	struct completion wait_backend_disconnected;
 };
 
 static unsigned int nr_minors;
@@ -262,6 +267,16 @@  static DEFINE_SPINLOCK(minor_lock);
 static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo);
 static void blkfront_gather_backend_features(struct blkfront_info *info);
 static int negotiate_mq(struct blkfront_info *info);
+static void __blkif_free(struct blkfront_info *info);
+
+static inline bool blkfront_ring_is_busy(struct blkif_front_ring *ring)
+{
+       if (RING_SIZE(ring) > RING_FREE_REQUESTS(ring) ||
+		RING_HAS_UNCONSUMED_RESPONSES(ring))
+		return true;
+	else
+		return false;
+}
 
 static int get_id_from_freelist(struct blkfront_ring_info *rinfo)
 {
@@ -996,6 +1011,7 @@  static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size,
 	info->sector_size = sector_size;
 	info->physical_sector_size = physical_sector_size;
 	blkif_set_queue_limits(info);
+	init_completion(&info->wait_backend_disconnected);
 
 	return 0;
 }
@@ -1219,6 +1235,8 @@  static void xlvbd_release_gendisk(struct blkfront_info *info)
 /* Already hold rinfo->ring_lock. */
 static inline void kick_pending_request_queues_locked(struct blkfront_ring_info *rinfo)
 {
+	if (unlikely(rinfo->dev_info->connected == BLKIF_STATE_FREEZING))
+		return;
 	if (!RING_FULL(&rinfo->ring))
 		blk_mq_start_stopped_hw_queues(rinfo->dev_info->rq, true);
 }
@@ -1342,8 +1360,6 @@  static void blkif_free_ring(struct blkfront_ring_info *rinfo)
 
 static void blkif_free(struct blkfront_info *info, int suspend)
 {
-	unsigned int i;
-
 	/* Prevent new requests being issued until we fix things up. */
 	info->connected = suspend ?
 		BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
@@ -1351,6 +1367,13 @@  static void blkif_free(struct blkfront_info *info, int suspend)
 	if (info->rq)
 		blk_mq_stop_hw_queues(info->rq);
 
+	__blkif_free(info);
+}
+
+static void __blkif_free(struct blkfront_info *info)
+{
+	unsigned int i;
+
 	for (i = 0; i < info->nr_rings; i++)
 		blkif_free_ring(&info->rinfo[i]);
 
@@ -1554,8 +1577,10 @@  static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
 	struct blkfront_info *info = rinfo->dev_info;
 
-	if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
-		return IRQ_HANDLED;
+	if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {
+		if (info->connected != BLKIF_STATE_FREEZING)
+			return IRQ_HANDLED;
+	}
 
 	spin_lock_irqsave(&rinfo->ring_lock, flags);
  again:
@@ -2005,6 +2030,7 @@  static int blkif_recover(struct blkfront_info *info)
 	struct bio *bio;
 	unsigned int segs;
 
+	bool frozen = info->connected == BLKIF_STATE_FROZEN;
 	blkfront_gather_backend_features(info);
 	/* Reset limits changed by blk_mq_update_nr_hw_queues(). */
 	blkif_set_queue_limits(info);
@@ -2031,6 +2057,9 @@  static int blkif_recover(struct blkfront_info *info)
 		kick_pending_request_queues(rinfo);
 	}
 
+	if (frozen)
+		return 0;
+
 	list_for_each_entry_safe(req, n, &info->requests, queuelist) {
 		/* Requeue pending requests (flush or discard) */
 		list_del_init(&req->queuelist);
@@ -2335,6 +2364,7 @@  static void blkfront_connect(struct blkfront_info *info)
 
 		return;
 	case BLKIF_STATE_SUSPENDED:
+	case BLKIF_STATE_FROZEN:
 		/*
 		 * If we are recovering from suspension, we need to wait
 		 * for the backend to announce it's features before
@@ -2452,12 +2482,37 @@  static void blkback_changed(struct xenbus_device *dev,
 		break;
 
 	case XenbusStateClosed:
-		if (dev->state == XenbusStateClosed)
+		if (dev->state == XenbusStateClosed) {
+			if (info->connected == BLKIF_STATE_FREEZING) {
+				__blkif_free(info);
+				info->connected = BLKIF_STATE_FROZEN;
+				complete(&info->wait_backend_disconnected);
+				break;
+			}
+
+			break;
+		}
+
+		/*
+	 * We may somehow receive the backend's Closed state again while
+	 * thawing or restoring, which causes the thaw or restore to fail.
+	 * Ignore such an unexpected state.
+		 */
+		if (info->connected == BLKIF_STATE_FROZEN &&
+				dev->state == XenbusStateInitialised) {
+			dev_dbg(&dev->dev,
+					"ignore the backend's Closed state: %s",
+					dev->nodename);
 			break;
+		}
 		/* fall through */
 	case XenbusStateClosing:
-		if (info)
-			blkfront_closing(info);
+		if (info) {
+			if (info->connected == BLKIF_STATE_FREEZING)
+				xenbus_frontend_closed(dev);
+			else
+				blkfront_closing(info);
+		}
 		break;
 	}
 }
@@ -2594,6 +2649,92 @@  static void blkif_release(struct gendisk *disk, fmode_t mode)
 	mutex_unlock(&blkfront_mutex);
 }
 
+static int blkfront_freeze(struct xenbus_device *dev)
+{
+	unsigned int i;
+	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
+	struct blkfront_ring_info *rinfo;
+	struct blkif_front_ring *ring;
+	/* This is a reasonable timeout, as used in xenbus_dev_shutdown() */
+	unsigned int timeout = 5 * HZ;
+	int err = 0;
+
+	info->connected = BLKIF_STATE_FREEZING;
+
+	blk_mq_stop_hw_queues(info->rq);
+
+	for (i = 0; i < info->nr_rings; i++) {
+		rinfo = &info->rinfo[i];
+
+		gnttab_cancel_free_callback(&rinfo->callback);
+		flush_work(&rinfo->work);
+	}
+
+	for (i = 0; i < info->nr_rings; i++) {
+		spinlock_t *lock;
+		bool busy;
+		unsigned long req_timeout_ms = 25;
+		unsigned long ring_timeout;
+
+		rinfo = &info->rinfo[i];
+		ring = &rinfo->ring;
+
+		lock = &rinfo->ring_lock;
+
+		ring_timeout = jiffies +
+			msecs_to_jiffies(req_timeout_ms * RING_SIZE(ring));
+
+		do {
+			spin_lock_irq(lock);
+			busy = blkfront_ring_is_busy(ring);
+			spin_unlock_irq(lock);
+
+			if (busy)
+				msleep(req_timeout_ms);
+			else
+				break;
+		} while (time_is_after_jiffies(ring_timeout));
+
+		/* Timed out */
+		if (busy) {
+			xenbus_dev_error(dev, -EBUSY, "the ring is still busy");
+			info->connected = BLKIF_STATE_CONNECTED;
+			return -EBUSY;
+		}
+	}
+
+	/* Kick the backend to disconnect */
+	xenbus_switch_state(dev, XenbusStateClosing);
+
+	/*
+	 * We don't want to move forward before the frontend is disconnected
+	 * from the backend cleanly.
+	 */
+	timeout = wait_for_completion_timeout(&info->wait_backend_disconnected,
+					      timeout);
+	if (!timeout) {
+		err = -EBUSY;
+		xenbus_dev_error(dev, err, "Freezing timed out; "
+				 "the device may be left in an inconsistent state");
+	}
+
+	return err;
+}
+
+static int blkfront_restore(struct xenbus_device *dev)
+{
+	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
+	int err = 0;
+
+	err = talk_to_blkback(dev, info);
+	if (err)
+		goto out;
+	blk_mq_update_nr_hw_queues(&info->tag_set, info->nr_rings);
+
+out:
+	return err;
+}
+
 static const struct block_device_operations xlvbd_block_fops =
 {
 	.owner = THIS_MODULE,
@@ -2616,6 +2757,9 @@  static struct xenbus_driver blkfront_driver = {
 	.resume = blkfront_resume,
 	.otherend_changed = blkback_changed,
 	.is_ready = blkfront_is_ready,
+	.freeze = blkfront_freeze,
+	.thaw = blkfront_restore,
+	.restore = blkfront_restore
 };
 
 static int __init xlblk_init(void)