From patchwork Wed Jul 13 08:54:13 2016
X-Patchwork-Submitter: Suraj Jitindar Singh
X-Patchwork-Id: 647781
From: Suraj Jitindar Singh
To: linuxppc-dev@lists.ozlabs.org
Cc: sjitindarsingh@gmail.com, pbonzini@redhat.com, rkrcmar@redhat.com,
    agraf@suse.com, benh@kernel.crashing.org, paulus@samba.org,
    mpe@ellerman.id.au, kvm-ppc@vger.kernel.org
Subject: [PATCH V3 5/5] powerpc/kvm/stats: Implement existing and add new halt polling vcpu stats
Date: Wed, 13 Jul 2016 18:54:13 +1000
Message-Id: <1468400053-5115-1-git-send-email-sjitindarsingh@gmail.com>
X-Mailer: git-send-email 2.5.5

vcpu stats are used to collect information about a vcpu which can be
viewed in the debugfs. For example halt_attempted_poll and
halt_successful_poll are used to keep track of the number of times the
vcpu attempts to and successfully polls. These stats are currently not
used on powerpc.

Implement incrementing of the halt_attempted_poll and
halt_successful_poll vcpu stats for powerpc. Since these stats are
summed over all the vcpus for a running guest it doesn't matter which
vcpu they are attributed to, thus we choose the current runner vcpu of
the vcore.
Also add new vcpu stats: halt_block_time_successful_poll and
halt_block_time_waited to accumulate the total time spent blocked when
we either successfully polled, or unsuccessfully polled and then
waited, respectively, and halt_successful_wait to accumulate the
number of times the vcpu waited. Given that these block times are
expressed in nanoseconds, it is necessary to represent them as 64-bit
quantities, otherwise they would overflow after only about 4 seconds.

Given that the total time spent either successfully polling or
unsuccessfully polling and then waiting will be known, as will the
number of times each was done, it will be possible to determine the
average block time of the vcore when it either successfully polls or
unsuccessfully polls and then waits. This gives the ability to tune
the kvm module parameters based on the calculated average block time
in each scenario.

---
Change Log:

V1 -> V2:
- Nothing

V2 -> V3:
- Rename vcpu stats to express with greater clarity what they
  represent:
	halt_poll_time -> halt_block_time_successful_poll
	halt_wait_time -> halt_block_time_waited

Signed-off-by: Suraj Jitindar Singh

---
 arch/powerpc/include/asm/kvm_host.h |  3 +++
 arch/powerpc/kvm/book3s.c           |  5 +++++
 arch/powerpc/kvm/book3s_hv.c        | 16 ++++++++++++++--
 3 files changed, 22 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 5c52a9f..c0e16d4 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -114,8 +114,11 @@ struct kvm_vcpu_stat {
 	u64 emulated_inst_exits;
 	u64 dec_exits;
 	u64 ext_intr_exits;
+	u64 halt_block_time_successful_poll;
+	u64 halt_block_time_waited;
 	u64 halt_successful_poll;
 	u64 halt_attempted_poll;
+	u64 halt_successful_wait;
 	u64 halt_poll_invalid;
 	u64 halt_wakeup;
 	u64 dbell_exits;
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 47018fc..0c1295b 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -52,8 +52,13 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 	{ "dec", VCPU_STAT(dec_exits) },
 	{ "ext_intr", VCPU_STAT(ext_intr_exits) },
 	{ "queue_intr", VCPU_STAT(queue_intr) },
+	{ "halt_block_time_successful_poll_ns",
+			VCPU_STAT(halt_block_time_successful_poll) },
+	{ "halt_block_time_waited_ns",
+			VCPU_STAT(halt_block_time_waited) },
 	{ "halt_successful_poll", VCPU_STAT(halt_successful_poll), },
 	{ "halt_attempted_poll", VCPU_STAT(halt_attempted_poll), },
+	{ "halt_successful_wait", VCPU_STAT(halt_successful_wait) },
 	{ "halt_poll_invalid", VCPU_STAT(halt_poll_invalid) },
 	{ "halt_wakeup", VCPU_STAT(halt_wakeup) },
 	{ "pf_storage", VCPU_STAT(pf_storage) },
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 0d8ce14..975a757 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2679,8 +2679,8 @@ static int kvmppc_vcore_check_block(struct kvmppc_vcore *vc)
  */
 static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
 {
-	int do_sleep = 1;
 	ktime_t cur, start;
+	int do_sleep = 1;
 	u64 block_ns;
 	DECLARE_SWAITQUEUE(wait);
@@ -2688,6 +2688,7 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
 	cur = start = ktime_get();
 	if (vc->halt_poll_ns) {
 		ktime_t stop = ktime_add_ns(start, vc->halt_poll_ns);
+		++vc->runner->stat.halt_attempted_poll;
 
 		vc->vcore_state = VCORE_POLLING;
 		spin_unlock(&vc->lock);
@@ -2703,8 +2704,10 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
 		spin_lock(&vc->lock);
 		vc->vcore_state = VCORE_INACTIVE;
 
-		if (!do_sleep)
+		if (!do_sleep) {
+			++vc->runner->stat.halt_successful_poll;
 			goto out;
+		}
 	}
 
 	prepare_to_swait(&vc->wq, &wait, TASK_INTERRUPTIBLE);
@@ -2712,6 +2715,9 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
 	if (kvmppc_vcore_check_block(vc)) {
 		finish_swait(&vc->wq, &wait);
 		do_sleep = 0;
+		/* If we polled, count this as a successful poll */
+		if (vc->halt_poll_ns)
+			++vc->runner->stat.halt_successful_poll;
 		goto out;
 	}
 
@@ -2723,12 +2729,18 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
 	spin_lock(&vc->lock);
 	vc->vcore_state = VCORE_INACTIVE;
 	trace_kvmppc_vcore_blocked(vc, 1);
+	++vc->runner->stat.halt_successful_wait;
 
 	cur = ktime_get();
 
 out:
 	block_ns = ktime_to_ns(cur) - ktime_to_ns(start);
 
+	if (do_sleep)
+		vc->runner->stat.halt_block_time_waited += block_ns;
+	else if (vc->halt_poll_ns)
+		vc->runner->stat.halt_block_time_successful_poll += block_ns;
+
 	if (halt_poll_max_ns) {
 		if (block_ns <= vc->halt_poll_ns)
 			;