From patchwork Wed Nov 4 12:05:54 2020
X-Patchwork-Submitter: Andrea Righi
X-Patchwork-Id: 1393855
From: Andrea Righi
To: kernel-team@lists.ubuntu.com
Subject: [SRU][G/aws][PATCH 2/2] PM: hibernate: Batch hibernate and resume IO requests
Date: Wed, 4 Nov 2020 13:05:54 +0100
Message-Id: <20201104120554.944255-3-andrea.righi@canonical.com>
In-Reply-To: <20201104120554.944255-1-andrea.righi@canonical.com>
References: <20201104120554.944255-1-andrea.righi@canonical.com>

From: Xiaoyi Chen

BugLink: https://bugs.launchpad.net/bugs/1902864

The hibernate and resume process submits individual IO requests for each
page of data, so use blk_plug to improve the batching of these requests.

Testing this change with hibernate and resume consistently shows merging
of the IO requests and more than an order of magnitude improvement in
hibernate and resume speed. One hibernate and resume cycle for 16GB of
RAM out of 32GB in use takes around 21 minutes before the change, and
1 minute after, on a system with limited storage IOPS.

Signed-off-by: Xiaoyi Chen
Co-Developed-by: Anchal Agarwal
Signed-off-by: Anchal Agarwal
[ rjw: Subject and changelog edits, white space damage fixes ]
Signed-off-by: Rafael J. Wysocki
(cherry picked from commit 55c4478a8f0ecedc0c1a0c9379380249985c372a)
Signed-off-by: Andrea Righi
---
 kernel/power/swap.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/kernel/power/swap.c b/kernel/power/swap.c
index 01e2858b5fe3..116320a0394d 100644
--- a/kernel/power/swap.c
+++ b/kernel/power/swap.c
@@ -226,6 +226,7 @@ struct hib_bio_batch {
 	atomic_t		count;
 	wait_queue_head_t	wait;
 	blk_status_t		error;
+	struct blk_plug		plug;
 };
 
 static void hib_init_batch(struct hib_bio_batch *hb)
@@ -233,6 +234,12 @@ static void hib_init_batch(struct hib_bio_batch *hb)
 	atomic_set(&hb->count, 0);
 	init_waitqueue_head(&hb->wait);
 	hb->error = BLK_STS_OK;
+	blk_start_plug(&hb->plug);
+}
+
+static void hib_finish_batch(struct hib_bio_batch *hb)
+{
+	blk_finish_plug(&hb->plug);
 }
 
 static void hib_end_io(struct bio *bio)
@@ -294,6 +301,10 @@ static int hib_submit_io(int op, int op_flags, pgoff_t page_off, void *addr,
 
 static blk_status_t hib_wait_io(struct hib_bio_batch *hb)
 {
+	/*
+	 * We are relying on the behavior of blk_plug that a thread with
+	 * a plug will flush the plug list before sleeping.
+	 */
 	wait_event(hb->wait, atomic_read(&hb->count) == 0);
 	return blk_status_to_errno(hb->error);
 }
@@ -561,6 +572,7 @@ static int save_image(struct swap_map_handle *handle,
 		nr_pages++;
 	}
 	err2 = hib_wait_io(&hb);
+	hib_finish_batch(&hb);
 	stop = ktime_get();
 	if (!ret)
 		ret = err2;
@@ -854,6 +866,7 @@ static int save_image_lzo(struct swap_map_handle *handle,
 	pr_info("Image saving done\n");
 	swsusp_show_speed(start, stop, nr_to_write, "Wrote");
 out_clean:
+	hib_finish_batch(&hb);
 	if (crc) {
 		if (crc->thr)
 			kthread_stop(crc->thr);
@@ -1084,6 +1097,7 @@ static int load_image(struct swap_map_handle *handle,
 		nr_pages++;
 	}
 	err2 = hib_wait_io(&hb);
+	hib_finish_batch(&hb);
 	stop = ktime_get();
 	if (!ret)
 		ret = err2;
@@ -1447,6 +1461,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
 	}
 	swsusp_show_speed(start, stop, nr_to_read, "Read");
 out_clean:
+	hib_finish_batch(&hb);
 	for (i = 0; i < ring_size; i++)
 		free_page((unsigned long)page[i]);
 	if (crc) {
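
For reference, and not part of the patch itself, below is a minimal sketch of
the blk_plug batching pattern the change relies on. submit_one_page() and
submit_pages_batched() are hypothetical stand-ins for the per-page bio
submission done by hib_submit_io() and its callers; blk_start_plug() and
blk_finish_plug() are the real block-layer API used by the patch.

#include <linux/blkdev.h>

/* Hypothetical per-page submission helper, standing in for hib_submit_io(). */
static void submit_one_page(unsigned long page_nr);

static void submit_pages_batched(unsigned long nr_pages)
{
	struct blk_plug plug;
	unsigned long i;

	/*
	 * Queue subsequent requests on a per-task plug list instead of
	 * dispatching each one to the device immediately.
	 */
	blk_start_plug(&plug);

	for (i = 0; i < nr_pages; i++)
		submit_one_page(i);

	/*
	 * Flush the plug list: queued requests are merged where possible
	 * and dispatched to the device in larger batches.
	 */
	blk_finish_plug(&plug);
}

Keeping the plug open across the whole loop is what allows the block layer to
merge the many single-page requests, which is where the reported speedup on
IOPS-limited storage comes from.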