From patchwork Tue Oct 20 15:40:09 2015
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 533048
Subject: Fwd: Re: [PATCH v3 2/4] KVM: use simple waitqueue for vcpu->wq
From: Paolo Bonzini
To: "kvm-ppc@vger.kernel.org"
Date: Tue, 20 Oct 2015 17:40:09 +0200
Message-ID: <56266059.1070501@redhat.com>
In-Reply-To: <20151020140031.GG17308@twins.programming.kicks-ass.net>
References: <20151020140031.GG17308@twins.programming.kicks-ass.net>
X-Mailing-List: kvm-ppc@vger.kernel.org

Hi kvm-ppc folks, there is a remark for you in here...

Message ID is 20151020140031.GG17308@twins.programming.kicks-ass.net

Thanks,

Paolo

-------- Forwarded Message --------
Subject: Re: [PATCH v3 2/4] KVM: use simple waitqueue for vcpu->wq
Date: Tue, 20 Oct 2015 16:00:31 +0200
From: Peter Zijlstra
To: Daniel Wagner
CC: linux-kernel@vger.kernel.org, linux-rt-users@vger.kernel.org,
    Marcelo Tosatti, Paolo Bonzini, Paul E. McKenney, Paul Gortmaker,
    Thomas Gleixner
On Tue, Oct 20, 2015 at 09:28:08AM +0200, Daniel Wagner wrote:
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 2280497..f534e15 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -2560,10 +2560,9 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
>  {
>  	struct kvm_vcpu *vcpu;
>  	int do_sleep = 1;
> +	DECLARE_SWAITQUEUE(wait);
> 
> -	DEFINE_WAIT(wait);
> -
> -	prepare_to_wait(&vc->wq, &wait, TASK_INTERRUPTIBLE);
> +	prepare_to_swait(&vc->wq, &wait, TASK_INTERRUPTIBLE);
> 
>  	/*
>  	 * Check one last time for pending exceptions and ceded state after
> @@ -2577,7 +2576,7 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
>  	}
> 
>  	if (!do_sleep) {
> -		finish_wait(&vc->wq, &wait);
> +		finish_swait(&vc->wq, &wait);
>  		return;
>  	}
> 
> @@ -2585,7 +2584,7 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
>  	trace_kvmppc_vcore_blocked(vc, 0);
>  	spin_unlock(&vc->lock);
>  	schedule();
> -	finish_wait(&vc->wq, &wait);
> +	finish_swait(&vc->wq, &wait);
>  	spin_lock(&vc->lock);
>  	vc->vcore_state = VCORE_INACTIVE;
>  	trace_kvmppc_vcore_blocked(vc, 1);

This one looks buggy, one should _NOT_ assume that your blocking
condition is true after schedule().

> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 8db1d93..45ab55f 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -2019,7 +2018,7 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
>  	}
> 
>  	for (;;) {
> -		prepare_to_wait(&vcpu->wq, &wait, TASK_INTERRUPTIBLE);
> +		prepare_to_swait(&vcpu->wq, &wait, TASK_INTERRUPTIBLE);
> 
>  		if (kvm_vcpu_check_block(vcpu) < 0)
>  			break;
> @@ -2028,7 +2027,7 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
>  		schedule();
>  	}
> 
> -	finish_wait(&vcpu->wq, &wait);
> +	finish_swait(&vcpu->wq, &wait);
>  	cur = ktime_get();
> 
>  out:

Should we not take this opportunity to get rid of these open-coded wait
loops? Does this work?

---
 arch/powerpc/kvm/book3s_hv.c | 33 +++++++++++++++++----------------
 virt/kvm/kvm_main.c          | 13 ++-----------
 2 files changed, 19 insertions(+), 27 deletions(-)
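For readers following the rewrite: ___wait_event(wq, condition, state,
exclusive, ret, cmd) loops and re-checks condition after every execution of
cmd, which is exactly the re-check the one-shot prepare/schedule/finish
sequence in the quoted kvmppc_vcore_blocked() lacks. As a rough sketch (not
the exact macro body from include/linux/wait.h, and omitting the
signal-pending handling, which a TASK_IDLE sleeper never hits), the vcore
call in the diff below behaves like:

	DEFINE_WAIT(wait);

	for (;;) {
		prepare_to_wait_event(&vc->wq, &wait, TASK_IDLE);
		if (!kvmppc_vcore_should_sleep(vc))	/* the "condition" argument */
			break;
		kvmppc_vcore_schedule(vc);		/* the "cmd" argument */
	}
	finish_wait(&vc->wq, &wait);

kvmppc_vcore_should_sleep() and kvmppc_vcore_schedule() are the two helpers
the diff factors out of the old open-coded function.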
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 228049786888..b5b8bcad5105 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2552,18 +2552,10 @@ static void kvmppc_wait_for_exec(struct kvmppc_vcore *vc,
 	finish_wait(&vcpu->arch.cpu_run, &wait);
 }
 
-/*
- * All the vcpus in this vcore are idle, so wait for a decrementer
- * or external interrupt to one of the vcpus. vc->lock is held.
- */
-static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
+static inline bool kvmppc_vcore_should_sleep(struct kvmppc_vcore *vc)
 {
 	struct kvm_vcpu *vcpu;
-	int do_sleep = 1;
-
-	DEFINE_WAIT(wait);
-
-	prepare_to_wait(&vc->wq, &wait, TASK_INTERRUPTIBLE);
+	bool sleep = true;
 
 	/*
 	 * Check one last time for pending exceptions and ceded state after
@@ -2571,26 +2563,35 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
 	 */
 	list_for_each_entry(vcpu, &vc->runnable_threads, arch.run_list) {
 		if (vcpu->arch.pending_exceptions || !vcpu->arch.ceded) {
-			do_sleep = 0;
+			sleep = false;
 			break;
 		}
 	}
 
-	if (!do_sleep) {
-		finish_wait(&vc->wq, &wait);
-		return;
-	}
+	return sleep;
+}
 
+static inline void kvmppc_vcore_schedule(struct kvmppc_vcore *vc)
+{
 	vc->vcore_state = VCORE_SLEEPING;
 	trace_kvmppc_vcore_blocked(vc, 0);
 	spin_unlock(&vc->lock);
 	schedule();
-	finish_wait(&vc->wq, &wait);
 	spin_lock(&vc->lock);
 	vc->vcore_state = VCORE_INACTIVE;
 	trace_kvmppc_vcore_blocked(vc, 1);
 }
 
+/*
+ * All the vcpus in this vcore are idle, so wait for a decrementer
+ * or external interrupt to one of the vcpus. vc->lock is held.
+ */
+static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
+{
+	___wait_event(vc->wq, !kvmppc_vcore_should_sleep(vc), TASK_IDLE, 0, 0,
+		      kvmppc_vcore_schedule(vc));
+}
+
 static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 {
 	int n_ceded;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 8db1d9361993..488f00d79059 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1996,7 +1996,6 @@ static int kvm_vcpu_check_block(struct kvm_vcpu *vcpu)
 void kvm_vcpu_block(struct kvm_vcpu *vcpu)
 {
 	ktime_t start, cur;
-	DEFINE_WAIT(wait);
 	bool waited = false;
 	u64 block_ns;
 
@@ -2018,17 +2017,9 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
 		} while (single_task_running() && ktime_before(cur, stop));
 	}
 
-	for (;;) {
-		prepare_to_wait(&vcpu->wq, &wait, TASK_INTERRUPTIBLE);
+	___wait_event(vcpu->wq, kvm_vcpu_check_block(vcpu) < 0, TASK_IDLE, 0, 0,
+		      waited = true; schedule());
 
-		if (kvm_vcpu_check_block(vcpu) < 0)
-			break;
-
-		waited = true;
-		schedule();
-	}
-
-	finish_wait(&vcpu->wq, &wait);
 	cur = ktime_get();
 
 out:
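Similarly, the kvm_vcpu_block() hunk boils down to roughly the following
once the macro is expanded (again a sketch, not the literal expansion): the
same shape as the open-coded loop it deletes, except that the task now
sleeps in TASK_IDLE rather than TASK_INTERRUPTIBLE, and "waited = true;
schedule()" is passed as the cmd, so waited is still set each time the vcpu
is about to sleep:

	DEFINE_WAIT(wait);

	for (;;) {
		prepare_to_wait_event(&vcpu->wq, &wait, TASK_IDLE);
		if (kvm_vcpu_check_block(vcpu) < 0)
			break;
		waited = true;		/* part of the "cmd" argument */
		schedule();
	}
	finish_wait(&vcpu->wq, &wait);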