[13/50] KVM: PPC: booke: check for signals in kvmppc_vcpu_run

Message ID 1325639448-9494-14-git-send-email-agraf@suse.de
State New, archived

Commit Message

Alexander Graf Jan. 4, 2012, 1:10 a.m. UTC
From: Scott Wood <scottwood@freescale.com>

Currently we check prior to returning from a lightweight exit,
but not prior to initial entry.

book3s already does a similar test.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/booke.c |   10 +++++++++-
 1 files changed, 9 insertions(+), 1 deletions(-)

Comments

Avi Kivity Jan. 8, 2012, 1:18 p.m. UTC | #1
On 01/04/2012 03:10 AM, Alexander Graf wrote:
> From: Scott Wood <scottwood@freescale.com>
>
> Currently we check prior to returning from a lightweight exit,
> but not prior to initial entry.
>
> book3s already does a similar test.
>
> Signed-off-by: Scott Wood <scottwood@freescale.com>
> Signed-off-by: Alexander Graf <agraf@suse.de>
> ---
>  arch/powerpc/kvm/booke.c |   10 +++++++++-
>  1 files changed, 9 insertions(+), 1 deletions(-)
>
> diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
> index b642200..9c78589 100644
> --- a/arch/powerpc/kvm/booke.c
> +++ b/arch/powerpc/kvm/booke.c
> @@ -322,11 +322,19 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
>  	}
>  
>  	local_irq_disable();
> +
> +	if (signal_pending(current)) {
> +		kvm_run->exit_reason = KVM_EXIT_INTR;
> +		ret = -EINTR;
> +		goto out;
> +	}
> +
>  	kvm_guest_enter();
>  	ret = __kvmppc_vcpu_run(kvm_run, vcpu);
>  	kvm_guest_exit();
> -	local_irq_enable();
>  
>

In general a single check prior to entry is sufficient (well, in
addition to the one in kvm_vcpu_block()).
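
For context, the kvm_vcpu_block() check mentioned above is the signal test inside the generic halt/wait loop. A simplified sketch (not verbatim kernel code; locking, requests and statistics are elided):

	void kvm_vcpu_block(struct kvm_vcpu *vcpu)
	{
		DEFINE_WAIT(wait);

		for (;;) {
			prepare_to_wait(&vcpu->wq, &wait, TASK_INTERRUPTIBLE);

			if (kvm_arch_vcpu_runnable(vcpu))
				break;
			if (kvm_cpu_has_pending_timer(vcpu))
				break;
			if (signal_pending(current))	/* the check referred to above */
				break;

			schedule();
		}

		finish_wait(&vcpu->wq, &wait);
	}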
Alexander Graf Jan. 8, 2012, 3:11 p.m. UTC | #2
On 08.01.2012, at 14:18, Avi Kivity wrote:

> On 01/04/2012 03:10 AM, Alexander Graf wrote:
>> From: Scott Wood <scottwood@freescale.com>
>> 
>> Currently we check prior to returning from a lightweight exit,
>> but not prior to initial entry.
>> 
>> book3s already does a similar test.
>> 
>> Signed-off-by: Scott Wood <scottwood@freescale.com>
>> Signed-off-by: Alexander Graf <agraf@suse.de>
>> ---
>> arch/powerpc/kvm/booke.c |   10 +++++++++-
>> 1 files changed, 9 insertions(+), 1 deletions(-)
>> 
>> diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
>> index b642200..9c78589 100644
>> --- a/arch/powerpc/kvm/booke.c
>> +++ b/arch/powerpc/kvm/booke.c
>> @@ -322,11 +322,19 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
>> 	}
>> 
>> 	local_irq_disable();
>> +
>> +	if (signal_pending(current)) {
>> +		kvm_run->exit_reason = KVM_EXIT_INTR;
>> +		ret = -EINTR;
>> +		goto out;
>> +	}
>> +
>> 	kvm_guest_enter();
>> 	ret = __kvmppc_vcpu_run(kvm_run, vcpu);
>> 	kvm_guest_exit();
>> -	local_irq_enable();
>> 
>> 
> 
> In general a single check prior to entry is sufficient (well, in
> addition to the one in kvm_vcpu_block()).

Yes, and IIUC this is the single check prior to entry. On a lightweight exit we don't return from __kvmppc_vcpu_run; we only call kvmppc_handle_exit(), and if that decides we should go back into the guest, we stay inside __kvmppc_vcpu_run and never return from here.
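
In C-level terms, the structure of __kvmppc_vcpu_run is roughly the following (illustrative sketch only -- the real loop is assembly in booke_interrupts.S, and exit_nr just stands in for the saved exit cause):

	do {
		/* load guest volatiles, rfi into the guest,
		 * come back here on the next interrupt/exit */
		r = kvmppc_handle_exit(run, vcpu, exit_nr);
	} while (!(r & RESUME_FLAG_HOST));	/* lightweight exit: loop, don't return */

	/* heavyweight exit: restore full host state, return to kvmppc_vcpu_run() */
	return r >> 2;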


Alex

Avi Kivity Jan. 8, 2012, 3:22 p.m. UTC | #3
On 01/08/2012 05:11 PM, Alexander Graf wrote:
> On 08.01.2012, at 14:18, Avi Kivity wrote:
>
> > On 01/04/2012 03:10 AM, Alexander Graf wrote:
> >> From: Scott Wood <scottwood@freescale.com>
> >> 
> >> Currently we check prior to returning from a lightweight exit,
> >> but not prior to initial entry.
> >> 
> >> book3s already does a similar test.
> >> 
> >> Signed-off-by: Scott Wood <scottwood@freescale.com>
> >> Signed-off-by: Alexander Graf <agraf@suse.de>
> >> ---
> >> arch/powerpc/kvm/booke.c |   10 +++++++++-
> >> 1 files changed, 9 insertions(+), 1 deletions(-)
> >> 
> >> diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
> >> index b642200..9c78589 100644
> >> --- a/arch/powerpc/kvm/booke.c
> >> +++ b/arch/powerpc/kvm/booke.c
> >> @@ -322,11 +322,19 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
> >> 	}
> >> 
> >> 	local_irq_disable();
> >> +
> >> +	if (signal_pending(current)) {
> >> +		kvm_run->exit_reason = KVM_EXIT_INTR;
> >> +		ret = -EINTR;
> >> +		goto out;
> >> +	}
> >> +
> >> 	kvm_guest_enter();
> >> 	ret = __kvmppc_vcpu_run(kvm_run, vcpu);
> >> 	kvm_guest_exit();
> >> -	local_irq_enable();
> >> 
> >> 
> > 
> > In general a single check prior to entry is sufficient (well, in
> > addition to the one in kvm_vcpu_block()).
>
> Yes, and IIUC this is the single check prior to entry. On lightweight exit, we don't return from __kvmppc_vcpu_run, but only call kvmppc_handle_exit() and if that exits that we return to the guest, we stay inside of __kvmppc_vcpu_run and don't return from here.

It means you check twice per heavyweight exit, no?  Once here, and once
when kvmppc_handle_exit() returns.  If, instead, you move the check to
just before the lightweight entry, you check just once per entry, like x86.
Alexander Graf Jan. 8, 2012, 3:37 p.m. UTC | #4
On 08.01.2012, at 16:22, Avi Kivity wrote:

> On 01/08/2012 05:11 PM, Alexander Graf wrote:
>> On 08.01.2012, at 14:18, Avi Kivity wrote:
>> 
>>> On 01/04/2012 03:10 AM, Alexander Graf wrote:
>>>> From: Scott Wood <scottwood@freescale.com>
>>>> 
>>>> Currently we check prior to returning from a lightweight exit,
>>>> but not prior to initial entry.
>>>> 
>>>> book3s already does a similar test.
>>>> 
>>>> Signed-off-by: Scott Wood <scottwood@freescale.com>
>>>> Signed-off-by: Alexander Graf <agraf@suse.de>
>>>> ---
>>>> arch/powerpc/kvm/booke.c |   10 +++++++++-
>>>> 1 files changed, 9 insertions(+), 1 deletions(-)
>>>> 
>>>> diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
>>>> index b642200..9c78589 100644
>>>> --- a/arch/powerpc/kvm/booke.c
>>>> +++ b/arch/powerpc/kvm/booke.c
>>>> @@ -322,11 +322,19 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
>>>> 	}
>>>> 
>>>> 	local_irq_disable();
>>>> +
>>>> +	if (signal_pending(current)) {
>>>> +		kvm_run->exit_reason = KVM_EXIT_INTR;
>>>> +		ret = -EINTR;
>>>> +		goto out;
>>>> +	}
>>>> +
>>>> 	kvm_guest_enter();
>>>> 	ret = __kvmppc_vcpu_run(kvm_run, vcpu);
>>>> 	kvm_guest_exit();
>>>> -	local_irq_enable();
>>>> 
>>>> 
>>> 
>>> In general a single check prior to entry is sufficient (well, in
>>> addition to the one in kvm_vcpu_block()).
>> 
>> Yes, and IIUC this is the single check prior to entry. On lightweight exit, we don't return from __kvmppc_vcpu_run, but only call kvmppc_handle_exit() and if that exits that we return to the guest, we stay inside of __kvmppc_vcpu_run and don't return from here.
> 
> It means you check twice per heavyweight exit, no?  Once here, and once
> when kvmppc_handle_exit() returns.  If, instead, you move the check to
> just before the lightweight entry, you check just once per entry, like x86.

You mean we check twice in case a heavyweight exit occurs right after another heavyweight exit? We need to check whether a signal is pending to determine heavyweight exits, so we definitely have to check at the end of the exit handlers:

  - check signal (none pending)
  - enter guest
  - exit guest because an external interrupt occurs (lightweight)
  - check signal (pending now because we got preempted)
  -> make exit heavyweight

If, however, a signal is already pending before we enter the guest, the guest is sitting in an endless loop, and the host has no timers configured because it's just waiting for that signal to be handled, we'd be stuck in a dead loop. So we have to check before entry either way.
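
Putting the two checks together, the flow with this patch applied is roughly this (condensed from the diff and from the exit-handler snippet quoted later in the thread; not verbatim):

	local_irq_disable();

	if (signal_pending(current)) {		/* (1) check before first entry */
		kvm_run->exit_reason = KVM_EXIT_INTR;
		ret = -EINTR;
		goto out;
	}

	kvm_guest_enter();
	ret = __kvmppc_vcpu_run(kvm_run, vcpu);	/* loops on lightweight exits;
						 * (2) kvmppc_handle_exit()
						 * re-checks signal_pending()
						 * after each one */
	kvm_guest_exit();

out:
	local_irq_enable();
	return ret;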


Alex

Avi Kivity Jan. 8, 2012, 4:13 p.m. UTC | #5
On 01/08/2012 05:37 PM, Alexander Graf wrote:
> On 08.01.2012, at 16:22, Avi Kivity wrote:
>
> > On 01/08/2012 05:11 PM, Alexander Graf wrote:
> >> On 08.01.2012, at 14:18, Avi Kivity wrote:
> >> 
> >>> On 01/04/2012 03:10 AM, Alexander Graf wrote:
> >>>> From: Scott Wood <scottwood@freescale.com>
> >>>> 
> >>>> Currently we check prior to returning from a lightweight exit,
> >>>> but not prior to initial entry.
> >>>> 
> >>>> book3s already does a similar test.
> >>>> 
> >>>> Signed-off-by: Scott Wood <scottwood@freescale.com>
> >>>> Signed-off-by: Alexander Graf <agraf@suse.de>
> >>>> ---
> >>>> arch/powerpc/kvm/booke.c |   10 +++++++++-
> >>>> 1 files changed, 9 insertions(+), 1 deletions(-)
> >>>> 
> >>>> diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
> >>>> index b642200..9c78589 100644
> >>>> --- a/arch/powerpc/kvm/booke.c
> >>>> +++ b/arch/powerpc/kvm/booke.c
> >>>> @@ -322,11 +322,19 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
> >>>> 	}
> >>>> 
> >>>> 	local_irq_disable();
> >>>> +
> >>>> +	if (signal_pending(current)) {
> >>>> +		kvm_run->exit_reason = KVM_EXIT_INTR;
> >>>> +		ret = -EINTR;
> >>>> +		goto out;
> >>>> +	}
> >>>> +
> >>>> 	kvm_guest_enter();
> >>>> 	ret = __kvmppc_vcpu_run(kvm_run, vcpu);
> >>>> 	kvm_guest_exit();
> >>>> -	local_irq_enable();
> >>>> 
> >>>> 
> >>> 
> >>> In general a single check prior to entry is sufficient (well, in
> >>> addition to the one in kvm_vcpu_block()).
> >> 
> >> Yes, and IIUC this is the single check prior to entry. On lightweight exit, we don't return from __kvmppc_vcpu_run, but only call kvmppc_handle_exit() and if that exits that we return to the guest, we stay inside of __kvmppc_vcpu_run and don't return from here.
> > 
> > It means you check twice per heavyweight exit, no?  Once here, and once
> > when kvmppc_handle_exit() returns.  If, instead, you move the check to
> > just before the lightweight entry, you check just once per entry, like x86.
>
> You mean we check twice in case a heavyweight exit occurs right after another heavyweight exit? We need to check whether a signal is pending to determine heavyweight exits, so we definitely have to check at the end of the exit handlers:
>
>   - check signal (none pending)
>   - enter guest
>   - exit guest because an external interrupt occurs (lightweight)
>   - check signal (pending now because we got preempted)
>   -> make exit heavyweight
>
> If however there already is a signal pending before we enter the guest and the guest is running an endless loop and the host doesn't have any timers configured because it's just waiting for the signal to be handled, we would be in a dead loop. So we have to check before entry either way.

Yes, but you're checking in the wrong place.

What you have now is:

   if (!signal_pending())
         do
              enter guest
         while (!signal_pending && is_lightweight_exit)

What x86 does is

   while (!signal_pending() && can_reenter)
           can_reenter = enter guest

well, that's not actually what x86 does, but what it should be doing.
The point is that checking for signals after an exit is meaningless.  You've
exited, so the guest can't be holding off a signal for the host.  The
check after the exit is really meant for the next entry, but it doesn't
cover the first entry, so you have another check before that.
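
A C rendering of that second shape could look like this (illustrative only -- enter_guest() is a placeholder, not a real KVM function, and this is not the actual x86 loop):

	int r = 1;

	while (r > 0) {
		if (signal_pending(current)) {
			run->exit_reason = KVM_EXIT_INTR;
			r = -EINTR;
			break;
		}
		r = enter_guest(run, vcpu);	/* > 0: may re-enter,
						 * <= 0: back to userspace */
	}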
Alexander Graf Jan. 8, 2012, 4:21 p.m. UTC | #6
On 08.01.2012, at 17:13, Avi Kivity wrote:

> On 01/08/2012 05:37 PM, Alexander Graf wrote:
>> On 08.01.2012, at 16:22, Avi Kivity wrote:
>> 
>>> On 01/08/2012 05:11 PM, Alexander Graf wrote:
>>>> On 08.01.2012, at 14:18, Avi Kivity wrote:
>>>> 
>>>>> On 01/04/2012 03:10 AM, Alexander Graf wrote:
>>>>>> From: Scott Wood <scottwood@freescale.com>
>>>>>> 
>>>>>> Currently we check prior to returning from a lightweight exit,
>>>>>> but not prior to initial entry.
>>>>>> 
>>>>>> book3s already does a similar test.
>>>>>> 
>>>>>> Signed-off-by: Scott Wood <scottwood@freescale.com>
>>>>>> Signed-off-by: Alexander Graf <agraf@suse.de>
>>>>>> ---
>>>>>> arch/powerpc/kvm/booke.c |   10 +++++++++-
>>>>>> 1 files changed, 9 insertions(+), 1 deletions(-)
>>>>>> 
>>>>>> diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
>>>>>> index b642200..9c78589 100644
>>>>>> --- a/arch/powerpc/kvm/booke.c
>>>>>> +++ b/arch/powerpc/kvm/booke.c
>>>>>> @@ -322,11 +322,19 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
>>>>>> 	}
>>>>>> 
>>>>>> 	local_irq_disable();
>>>>>> +
>>>>>> +	if (signal_pending(current)) {
>>>>>> +		kvm_run->exit_reason = KVM_EXIT_INTR;
>>>>>> +		ret = -EINTR;
>>>>>> +		goto out;
>>>>>> +	}
>>>>>> +
>>>>>> 	kvm_guest_enter();
>>>>>> 	ret = __kvmppc_vcpu_run(kvm_run, vcpu);
>>>>>> 	kvm_guest_exit();
>>>>>> -	local_irq_enable();
>>>>>> 
>>>>>> 
>>>>> 
>>>>> In general a single check prior to entry is sufficient (well, in
>>>>> addition to the one in kvm_vcpu_block()).
>>>> 
>>>> Yes, and IIUC this is the single check prior to entry. On lightweight exit, we don't return from __kvmppc_vcpu_run, but only call kvmppc_handle_exit() and if that exits that we return to the guest, we stay inside of __kvmppc_vcpu_run and don't return from here.
>>> 
>>> It means you check twice per heavyweight exit, no?  Once here, and once
>>> when kvmppc_handle_exit() returns.  If, instead, you move the check to
>>> just before the lightweight entry, you check just once per entry, like x86.
>> 
>> You mean we check twice in case a heavyweight exit occurs right after another heavyweight exit? We need to check whether a signal is pending to determine heavyweight exits, so we definitely have to check at the end of the exit handlers:
>> 
>>  - check signal (none pending)
>>  - enter guest
>>  - exit guest because an external interrupt occurs (lightweight)
>>  - check signal (pending now because we got preempted)
>>  -> make exit heavyweight
>> 
>> If however there already is a signal pending before we enter the guest and the guest is running an endless loop and the host doesn't have any timers configured because it's just waiting for the signal to be handled, we would be in a dead loop. So we have to check before entry either way.
> 
> Yes, but you're checking in the wrong place.
> 
> What you have now is:
> 
>   if (!signal_pending())
>         do
>              enter guest
>         while (!signal_pending && is_lightweight_exit)

Well, what we really do is

if (!signal_pending())
    do
        run guest
        if (signal_pending())
            is_lightweight_exit = 0;
    while is_lightweight_exit

but yes :).

> 
> What x86 does is
> 
>   while (!signal_pending() && can_reenter)
>           can_reenter = enter guest
> 
> well, that's not actually what x86 does, but what it should be doing. 
> The point is checking for signals after an exit is meaningless.  You've
> exited, so the guest can't be holding off a signal for the host.  The
> check after the exit is really meant for the next entry, but it doesn't
> cover the first entry, so you have another check before that.

Yes, but the end of the exit handler is basically the beginning of the entry loop. If we already know that we're in a heavyweight exit, we don't check for signals anymore:

        if (!(r & RESUME_HOST)) {
                /* To avoid clobbering exit_reason, only check for signals if
                 * we aren't already exiting to userspace for some other
                 * reason. */
                if (signal_pending(current)) {
                        run->exit_reason = KVM_EXIT_INTR;
                        r = (-EINTR << 2) | RESUME_HOST | (r & RESUME_FLAG_NV);
                        kvmppc_account_exit(vcpu, SIGNAL_EXITS);
                }
        }

That way we do basically what you're suggesting, but are a bit more explicit: there is checking code at the beginning and at the end, plus a condition that skips the check when it isn't needed.


Alex

Avi Kivity Jan. 8, 2012, 4:26 p.m. UTC | #7
On 01/08/2012 06:21 PM, Alexander Graf wrote:
> > 
> > well, that's not actually what x86 does, but what it should be doing. 
> > The point is checking for signals after an exit is meaningless.  You've
> > exited, so the guest can't be holding off a signal for the host.  The
> > check after the exit is really meant for the next entry, but it doesn't
> > cover the first entry, so you have another check before that.
>
> Yes, but the end of the exit handler is basically the beginning of the entry loop. If we already know that we're in a heavy weight exit, we don't check for signals anymore:
>
>         if (!(r & RESUME_HOST)) {
>                 /* To avoid clobbering exit_reason, only check for signals if
>                  * we aren't already exiting to userspace for some other
>                  * reason. */
>                 if (signal_pending(current)) {
>                         run->exit_reason = KVM_EXIT_INTR;
>                         r = (-EINTR << 2) | RESUME_HOST | (r & RESUME_FLAG_NV);
>                         kvmppc_account_exit(vcpu, SIGNAL_EXITS);
>                 }
>         }
>
> That way we do basically what you're suggesting, but are a bit more explicit, having checking code at the beginning and end and some conditional checking if it's required or not.
>
>

What I'm thinking of is unifying all of this.  But ppc driving the exit
handler from assembly doesn't allow this.

The goal is not just to save code, but to make sure that the various
assumptions that generic code (like a future kvm_vcpu_kick()) makes are
satisfied implicitly, by having just one copy.
Alexander Graf Jan. 8, 2012, 4:41 p.m. UTC | #8
On 08.01.2012, at 17:26, Avi Kivity wrote:

> On 01/08/2012 06:21 PM, Alexander Graf wrote:
>>> 
>>> well, that's not actually what x86 does, but what it should be doing. 
>>> The point is checking for signals after an exit is meaningless.  You've
>>> exited, so the guest can't be holding off a signal for the host.  The
>>> check after the exit is really meant for the next entry, but it doesn't
>>> cover the first entry, so you have another check before that.
>> 
>> Yes, but the end of the exit handler is basically the beginning of the entry loop. If we already know that we're in a heavy weight exit, we don't check for signals anymore:
>> 
>>        if (!(r & RESUME_HOST)) {
>>                /* To avoid clobbering exit_reason, only check for signals if
>>                 * we aren't already exiting to userspace for some other
>>                 * reason. */
>>                if (signal_pending(current)) {
>>                        run->exit_reason = KVM_EXIT_INTR;
>>                        r = (-EINTR << 2) | RESUME_HOST | (r & RESUME_FLAG_NV);
>>                        kvmppc_account_exit(vcpu, SIGNAL_EXITS);
>>                }
>>        }
>> 
>> That way we do basically what you're suggesting, but are a bit more explicit, having checking code at the beginning and end and some conditional checking if it's required or not.
>> 
>> 
> 
> What I'm thinking of is unifying all of this.  But ppc driving the exit
> handler from assembly doesn't allow this.

Exactly :). The place you're thinking of putting that code is in asm. And it has to be - the whole point of lightweight exits is that we don't have to restore all registers, which we would have to do if we returned from __kvmppc_vcpu_run. And that's basically what you'd need to do to keep this in the same code, right?

> The goal is not just to save code, but to make sure that the various
> assumptions that generic code (like a future kvm_vcpu_kick()) makes are
> satisfied implicitly, by having just one copy.

Well, yes. I agree that I'd prefer to have it in one place, but I can't think of a way to make that technically possible while keeping the code optimized. Fortunately, very few assumptions depend on having the code structured exactly the way you describe above, though.

The reason our kvm_vcpu_kick() implementation is slightly different from x86 is certainly not because we're doing the lightweight exits the way we do :).


Alex

Avi Kivity Jan. 8, 2012, 4:45 p.m. UTC | #9
On 01/08/2012 06:41 PM, Alexander Graf wrote:
> On 08.01.2012, at 17:26, Avi Kivity wrote:
>
> > On 01/08/2012 06:21 PM, Alexander Graf wrote:
> >>> 
> >>> well, that's not actually what x86 does, but what it should be doing. 
> >>> The point is checking for signals after an exit is meaningless.  You've
> >>> exited, so the guest can't be holding off a signal for the host.  The
> >>> check after the exit is really meant for the next entry, but it doesn't
> >>> cover the first entry, so you have another check before that.
> >> 
> >> Yes, but the end of the exit handler is basically the beginning of the entry loop. If we already know that we're in a heavy weight exit, we don't check for signals anymore:
> >> 
> >>        if (!(r & RESUME_HOST)) {
> >>                /* To avoid clobbering exit_reason, only check for signals if
> >>                 * we aren't already exiting to userspace for some other
> >>                 * reason. */
> >>                if (signal_pending(current)) {
> >>                        run->exit_reason = KVM_EXIT_INTR;
> >>                        r = (-EINTR << 2) | RESUME_HOST | (r & RESUME_FLAG_NV);
> >>                        kvmppc_account_exit(vcpu, SIGNAL_EXITS);
> >>                }
> >>        }
> >> 
> >> That way we do basically what you're suggesting, but are a bit more explicit, having checking code at the beginning and end and some conditional checking if it's required or not.
> >> 
> >> 
> > 
> > What I'm thinking of is unifying all of this.  But ppc driving the exit
> > handler from assembly doesn't allow this.
>
> Exactly :). The place you're thinking of having the code at is in asm code. And it has to be - the whole point of lightweight exits is that we don't have to restore all registers, which we would have to do if we returned from __kvmppc_vcpu_run. And that's basically what you'd need to keep this in the same code, right?

Right.  We could also have common code which is #ifdefed out for those
who don't like it.

>
> > The goal is not just to save code, but to make sure that the various
> > assumptions that generic code (like a future kvm_vcpu_kick()) makes are
> > satisfied implicitly, by having just one copy.
>
> Well, yes. I agree that I'd prefer to have it in one place but I can't think of a way to make this really technically possible by keeping the code optimized. Fortunately there are very few assumptions based on having the code structured in exactly the way you describe above though.
>
> The reason our kvm_vcpu_kick() implementation is slightly different from x86 is certainly not because we're doing the lightweight exits the way we do :).

I'm looking at it from a different angle.  If most of the code is common
the natural place to add a helper is in common code.  If it's not, then
you add it locally.  Eventually (=now) everyone has their own set of
helpers, and it's really hard to unify them because you can't keep track
of all of the assumptions that are embedded deeply within the code.

It's easier to add stuff than it is to delete stuff.
Alexander Graf Jan. 8, 2012, 4:54 p.m. UTC | #10
On 08.01.2012, at 17:45, Avi Kivity wrote:

> On 01/08/2012 06:41 PM, Alexander Graf wrote:
>> On 08.01.2012, at 17:26, Avi Kivity wrote:
>> 
>>> On 01/08/2012 06:21 PM, Alexander Graf wrote:
>>>>> 
>>>>> well, that's not actually what x86 does, but what it should be doing. 
>>>>> The point is checking for signals after an exit is meaningless.  You've
>>>>> exited, so the guest can't be holding off a signal for the host.  The
>>>>> check after the exit is really meant for the next entry, but it doesn't
>>>>> cover the first entry, so you have another check before that.
>>>> 
>>>> Yes, but the end of the exit handler is basically the beginning of the entry loop. If we already know that we're in a heavy weight exit, we don't check for signals anymore:
>>>> 
>>>>       if (!(r & RESUME_HOST)) {
>>>>               /* To avoid clobbering exit_reason, only check for signals if
>>>>                * we aren't already exiting to userspace for some other
>>>>                * reason. */
>>>>               if (signal_pending(current)) {
>>>>                       run->exit_reason = KVM_EXIT_INTR;
>>>>                       r = (-EINTR << 2) | RESUME_HOST | (r & RESUME_FLAG_NV);
>>>>                       kvmppc_account_exit(vcpu, SIGNAL_EXITS);
>>>>               }
>>>>       }
>>>> 
>>>> That way we do basically what you're suggesting, but are a bit more explicit, having checking code at the beginning and end and some conditional checking if it's required or not.
>>>> 
>>>> 
>>> 
>>> What I'm thinking of is unifying all of this.  But ppc driving the exit
>>> handler from assembly doesn't allow this.
>> 
>> Exactly :). The place you're thinking of having the code at is in asm code. And it has to be - the whole point of lightweight exits is that we don't have to restore all registers, which we would have to do if we returned from __kvmppc_vcpu_run. And that's basically what you'd need to keep this in the same code, right?
> 
> Right.  We could also have common code which is #ifdefed out for those
> who don't like it.
> 
>> 
>>> The goal is not just to save code, but to make sure that the various
>>> assumptions that generic code (like a future kvm_vcpu_kick()) makes are
>>> satisfied implicitly, by having just one copy.
>> 
>> Well, yes. I agree that I'd prefer to have it in one place but I can't think of a way to make this really technically possible by keeping the code optimized. Fortunately there are very few assumptions based on having the code structured in exactly the way you describe above though.
>> 
>> The reason our kvm_vcpu_kick() implementation is slightly different from x86 is certainly not because we're doing the lightweight exits the way we do :).
> 
> I'm looking at it from a different angle.  If most of the code is common
> the natural place to add a helper is in common code.  If it's not, then
> you add it locally.  Eventually (=now) everyone has their own set of
> helpers, and it's really hard to unify them because you can't keep track
> of all of the assumptions that are embedded deeply within the code.
> 
> It' easier to add stuff that it is to delete stuff.

I like your angle :). The difficult thing, however, is to actually know which parts are common and which ones aren't. So I think it makes more sense to add things in arch-specific code, but to keep them as generic and as close to the other archs as possible. If you then find common code, make it common.

That's basically what we're doing today with the kvm_vcpu_kick() code, for example. We now know which parts can actually be shared, and how.


Alex


Patch

diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index b642200..9c78589 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -322,11 +322,19 @@  int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 	}
 
 	local_irq_disable();
+
+	if (signal_pending(current)) {
+		kvm_run->exit_reason = KVM_EXIT_INTR;
+		ret = -EINTR;
+		goto out;
+	}
+
 	kvm_guest_enter();
 	ret = __kvmppc_vcpu_run(kvm_run, vcpu);
 	kvm_guest_exit();
-	local_irq_enable();
 
+out:
+	local_irq_enable();
 	return ret;
 }
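
From userspace, the effect of the new check is that KVM_RUN bails out before ever entering the guest when a signal is already pending; hypothetical snippet (vcpu_fd and run are assumed names):

	ret = ioctl(vcpu_fd, KVM_RUN, 0);
	if (ret < 0 && errno == EINTR) {
		/* run->exit_reason == KVM_EXIT_INTR: a signal interrupted
		 * (or, with this patch, preceded) guest entry; deal with it
		 * and issue KVM_RUN again. */
	}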