From patchwork Fri Jul 22 03:41:03 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Suraj Jitindar Singh
X-Patchwork-Id: 651500
From: Suraj Jitindar Singh
To: linuxppc-dev@lists.ozlabs.org
Cc: sjitindarsingh@gmail.com, kvm-ppc@vger.kernel.org, mpe@ellerman.id.au,
    paulus@samba.org, benh@kernel.crashing.org, pbonzini@redhat.com,
    agraf@suse.com, rkrcmar@redhat.com, dmatlack@google.com,
    borntraeger@de.ibm.com
Subject: [PATCH V5 5/5] powerpc/kvm/stats: Implement existing and add new halt polling vcpu stats
Date: Fri, 22 Jul 2016 13:41:03 +1000
Message-Id: <1469158863-28518-5-git-send-email-sjitindarsingh@gmail.com>
X-Mailer: git-send-email 2.5.5
In-Reply-To: <1469158863-28518-1-git-send-email-sjitindarsingh@gmail.com>
References: <1469158863-28518-1-git-send-email-sjitindarsingh@gmail.com>
X-Mailing-List: kvm-ppc@vger.kernel.org

vcpu stats are used to collect information about a vcpu which can be
viewed in debugfs. For example halt_attempted_poll and
halt_successful_poll are used to keep track of the number of times the
vcpu attempts to and successfully polls. These stats are currently not
used on powerpc.
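As background on where these counters end up (a sketch only, not part of
this patch): each VCPU_STAT entry is exposed by KVM as a file in debugfs,
aggregated across running guests. The mount point and file name below are
assumptions based on the usual layout; adjust for your system.

/* read_halt_stat.c - illustrative userspace sketch, not part of this patch */
#include <stdio.h>

int main(void)
{
        unsigned long long count = 0;
        /* Assumed path: debugfs mounted at /sys/kernel/debug */
        FILE *f = fopen("/sys/kernel/debug/kvm/halt_successful_poll", "r");

        if (!f) {
                perror("fopen");
                return 1;
        }
        if (fscanf(f, "%llu", &count) == 1)
                printf("halt_successful_poll: %llu\n", count);
        fclose(f);
        return 0;
}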
Implement incrementing of the halt_attempted_poll and
halt_successful_poll vcpu stats for powerpc. Since these stats are
summed over all the vcpus for all running guests it doesn't matter
which vcpu they are attributed to, so we attribute them to the current
runner vcpu of the vcore.

Also add new vcpu stats: halt_poll_success_ns, halt_poll_fail_ns and
halt_wait_ns to be used to accumulate the total time spent polling
successfully, polling unsuccessfully and waiting respectively, and
halt_successful_wait to accumulate the number of times the vcpu waits.
Given that halt_poll_success_ns, halt_poll_fail_ns and halt_wait_ns are
expressed in nanoseconds it is necessary to represent these as 64-bit
quantities, otherwise they would overflow after only about 4 seconds
(a 32-bit counter wraps after 2^32 ns, roughly 4.29 s).

Given that the total time spent either polling or waiting will be known
along with the number of times each was done, it will be possible to
determine the average poll and wait times (a sketch of this calculation
follows the patch below). This makes it possible to tune the kvm module
parameters based on the calculated average wait and poll times.

Signed-off-by: Suraj Jitindar Singh
Reviewed-by: David Matlack
---
Change Log:

V3 -> V4:
 - Instead of accounting just wait and poll time, separate these into
   successful_poll_time, failed_poll_time and wait_time.
V4 -> V5:
 - Add single_task_running() check to polling loop
---
 arch/powerpc/include/asm/kvm_host.h |  4 ++++
 arch/powerpc/kvm/book3s.c           |  4 ++++
 arch/powerpc/kvm/book3s_hv.c        | 38 +++++++++++++++++++++++++++++++------
 3 files changed, 40 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index f6304c5..f15ffc0 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -114,8 +114,12 @@ struct kvm_vcpu_stat {
         u64 emulated_inst_exits;
         u64 dec_exits;
         u64 ext_intr_exits;
+        u64 halt_poll_success_ns;
+        u64 halt_poll_fail_ns;
+        u64 halt_wait_ns;
         u64 halt_successful_poll;
         u64 halt_attempted_poll;
+        u64 halt_successful_wait;
         u64 halt_poll_invalid;
         u64 halt_wakeup;
         u64 dbell_exits;
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 47018fc..71eb8f3 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -52,8 +52,12 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
         { "dec",         VCPU_STAT(dec_exits) },
         { "ext_intr",    VCPU_STAT(ext_intr_exits) },
         { "queue_intr",  VCPU_STAT(queue_intr) },
+        { "halt_poll_success_ns", VCPU_STAT(halt_poll_success_ns) },
+        { "halt_poll_fail_ns",    VCPU_STAT(halt_poll_fail_ns) },
+        { "halt_wait_ns",         VCPU_STAT(halt_wait_ns) },
         { "halt_successful_poll", VCPU_STAT(halt_successful_poll), },
         { "halt_attempted_poll", VCPU_STAT(halt_attempted_poll), },
+        { "halt_successful_wait", VCPU_STAT(halt_successful_wait) },
         { "halt_poll_invalid", VCPU_STAT(halt_poll_invalid) },
         { "halt_wakeup", VCPU_STAT(halt_wakeup) },
         { "pf_storage",  VCPU_STAT(pf_storage) },
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index a9de1d4..b1d9e88 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2679,15 +2679,16 @@ static int kvmppc_vcore_check_block(struct kvmppc_vcore *vc)
  */
 static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
 {
+        ktime_t cur, start_poll, start_wait;
         int do_sleep = 1;
-        ktime_t cur, start;
         u64 block_ns;
         DECLARE_SWAITQUEUE(wait);

         /* Poll for pending exceptions and ceded state */
-        cur = start = ktime_get();
+        cur = start_poll = ktime_get();
         if (vc->halt_poll_ns) {
-                ktime_t stop = ktime_add_ns(start, vc->halt_poll_ns);
+                ktime_t stop = ktime_add_ns(start_poll, vc->halt_poll_ns);
+                ++vc->runner->stat.halt_attempted_poll;

                 vc->vcore_state = VCORE_POLLING;
                 spin_unlock(&vc->lock);
@@ -2698,13 +2699,15 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
                                 break;
                         }
                         cur = ktime_get();
-                } while (ktime_before(cur, stop));
+                } while (single_task_running() && ktime_before(cur, stop));

                 spin_lock(&vc->lock);
                 vc->vcore_state = VCORE_INACTIVE;

-                if (!do_sleep)
+                if (!do_sleep) {
+                        ++vc->runner->stat.halt_successful_poll;
                         goto out;
+                }
         }

         prepare_to_swait(&vc->wq, &wait, TASK_INTERRUPTIBLE);
@@ -2712,9 +2715,14 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
         if (kvmppc_vcore_check_block(vc)) {
                 finish_swait(&vc->wq, &wait);
                 do_sleep = 0;
+                /* If we polled, count this as a successful poll */
+                if (vc->halt_poll_ns)
+                        ++vc->runner->stat.halt_successful_poll;
                 goto out;
         }

+        start_wait = ktime_get();
+
         vc->vcore_state = VCORE_SLEEPING;
         trace_kvmppc_vcore_blocked(vc, 0);
         spin_unlock(&vc->lock);
@@ -2723,11 +2731,29 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
         spin_lock(&vc->lock);
         vc->vcore_state = VCORE_INACTIVE;
         trace_kvmppc_vcore_blocked(vc, 1);
+        ++vc->runner->stat.halt_successful_wait;

         cur = ktime_get();

 out:
-        block_ns = ktime_to_ns(cur) - ktime_to_ns(start);
+        block_ns = ktime_to_ns(cur) - ktime_to_ns(start_poll);
+
+        /* Attribute wait time */
+        if (do_sleep) {
+                vc->runner->stat.halt_wait_ns +=
+                        ktime_to_ns(cur) - ktime_to_ns(start_wait);
+                /* Attribute failed poll time */
+                if (vc->halt_poll_ns)
+                        vc->runner->stat.halt_poll_fail_ns +=
+                                ktime_to_ns(start_wait) -
+                                ktime_to_ns(start_poll);
+        } else {
+                /* Attribute successful poll time */
+                if (vc->halt_poll_ns)
+                        vc->runner->stat.halt_poll_success_ns +=
+                                ktime_to_ns(cur) -
+                                ktime_to_ns(start_poll);
+        }

         /* Adjust poll time */
         if (halt_poll_max_ns) {
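The averaging mentioned in the commit message amounts to dividing the
accumulated time counters by the corresponding event counts. A standalone
sketch with made-up values follows; it is not part of the patch, and real
numbers would come from the halt_poll_success_ns, halt_wait_ns,
halt_successful_poll and halt_successful_wait debugfs entries added above.

/* avg_halt_stats.c - illustrative sketch, not part of this patch */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
        /* Hypothetical counter snapshots (values invented for illustration) */
        uint64_t halt_poll_success_ns = 1200000;  /* total successful poll time */
        uint64_t halt_successful_poll = 40;       /* number of successful polls */
        uint64_t halt_wait_ns = 90000000;         /* total wait time */
        uint64_t halt_successful_wait = 15;       /* number of waits */

        if (halt_successful_poll)
                printf("avg successful poll: %llu ns\n",
                       (unsigned long long)(halt_poll_success_ns / halt_successful_poll));
        if (halt_successful_wait)
                printf("avg wait: %llu ns\n",
                       (unsigned long long)(halt_wait_ns / halt_successful_wait));
        return 0;
}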