
[2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support

Message ID 1308568312-5463-3-git-send-email-alevy@redhat.com
State New

Commit Message

Alon Levy June 20, 2011, 11:11 a.m. UTC
Add QXL_IO_UPDATE_MEM. Used to reduce vmexits from the guest when it resets the
spice server state before going to sleep.

The implementation requires an updated spice-server (0.8.2) with the new
worker function update_mem.
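
For illustration only, the intended guest-side usage before entering S3/S4
could look roughly like this (a sketch, not part of this patch; outb() and
io_base stand in for the driver's port-write helper and the qxl I/O BAR,
and the exact ordering is an assumption):

    /* render everything and drop spice-server references to devram */
    outb(QXL_UPDATE_MEM_RENDER_ALL, io_base + QXL_IO_UPDATE_MEM);
    /* ... guest frees everything the device put on the release ring ... */
    outb(QXL_UPDATE_MEM_FLUSH, io_base + QXL_IO_UPDATE_MEM);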

Cc: Yonit Halperin <yhalperi@redhat.com>
---
 hw/qxl.c |   26 ++++++++++++++++++++++++++
 1 files changed, 26 insertions(+), 0 deletions(-)

Comments

Gerd Hoffmann June 20, 2011, 12:13 p.m. UTC | #1
Hi,

> +    case QXL_IO_UPDATE_MEM:
> +        switch (val) {
> +        case (QXL_UPDATE_MEM_RENDER_ALL):
> +            d->ssd.worker->update_mem(d->ssd.worker);
> +            break;

What is the difference to one worker->stop() + worker->start() cycle?

cheers,
   Gerd
Alon Levy June 20, 2011, 12:57 p.m. UTC | #2
On Mon, Jun 20, 2011 at 02:13:36PM +0200, Gerd Hoffmann wrote:
>   Hi,
> 
> >+    case QXL_IO_UPDATE_MEM:
> >+        switch (val) {
> >+        case (QXL_UPDATE_MEM_RENDER_ALL):
> >+            d->ssd.worker->update_mem(d->ssd.worker);
> >+            break;
> 
> What is the difference to one worker->stop() + worker->start() cycle?

this won't disconnect any clients.

> 
> cheers,
>   Gerd
>
Alon Levy June 20, 2011, 12:58 p.m. UTC | #3
On Mon, Jun 20, 2011 at 02:13:36PM +0200, Gerd Hoffmann wrote:
>   Hi,
> 
> >+    case QXL_IO_UPDATE_MEM:
> >+        switch (val) {
> >+        case (QXL_UPDATE_MEM_RENDER_ALL):
> >+            d->ssd.worker->update_mem(d->ssd.worker);
> >+            break;
> 
> What is the difference to one worker->stop() + worker->start() cycle?
> 

ok, stop+start won't disconnect any clients either. But does stop render all waiting commands?
I'll have to look, I don't know if it does.

> cheers,
>   Gerd
>
Gerd Hoffmann June 20, 2011, 2:07 p.m. UTC | #4
>> What is the difference to one worker->stop() + worker->start() cycle?
>>
>
> ok, stop+start won't disconnect any clients either. But does stop render all waiting commands?
> I'll have to look, I don't know if it does.

It does.  This is what qemu uses to flush all spice server state to 
device memory on migration.

What is the reason for deleting all surfaces?

cheers,
   Gerd
Alon Levy June 20, 2011, 3:11 p.m. UTC | #5
On Mon, Jun 20, 2011 at 04:07:59PM +0200, Gerd Hoffmann wrote:
> >>What is the difference to one worker->stop() + worker->start() cycle?
> >>
> >
> >ok, stop+start won't disconnect any clients either. But does stop render all waiting commands?
> >I'll have to look, I don't know if it does.
> 
> It does.  This is what qemu uses to flush all spice server state to
> device memory on migration.
> 
> What is the reason for deleting all surfaces?

Making sure all references to PCI memory in devram are dropped. We would need to recreate all
the surfaces after reset anyway.

> 
> cheers,
>   Gerd
>
Alon Levy June 20, 2011, 3:48 p.m. UTC | #6
On Mon, Jun 20, 2011 at 05:11:07PM +0200, Alon Levy wrote:
> On Mon, Jun 20, 2011 at 04:07:59PM +0200, Gerd Hoffmann wrote:
> > >>What is the difference to one worker->stop() + worker->start() cycle?
> > >>
> > >
> > >ok, stop+start won't disconnect any clients either. But does stop render all waiting commands?
> > >I'll have to look, I don't know if it does.
> > 
> > It does.  This is what qemu uses to flush all spice server state to
> > device memory on migration.
> > 
> > What is the reason for deleting all surfaces?
> 
> Making sure all references are dropped to pci memory in devram. We would need to recreate all
> the surfaces after reset anyway.

That's not right. The reason is that for the windows driver I don't know
whether this is a resolution change or a suspend. So it was easier to destroy all the surfaces, and then the two cases are equal - before going to sleep / leaving the current resolution I destroy all the surfaces, and when coming back I recreate them. If it's a resolution change there is no coming-back stage, but since all surfaces are destroyed there is no error when the same surface ids are reused.

> 
> > 
> > cheers,
> >   Gerd
> > 
>
Gerd Hoffmann June 20, 2011, 3:50 p.m. UTC | #7
On 06/20/11 17:11, Alon Levy wrote:
> On Mon, Jun 20, 2011 at 04:07:59PM +0200, Gerd Hoffmann wrote:
>>>> What is the difference to one worker->stop() + worker->start() cycle?
>>>>
>>>
>>> ok, stop+start won't disconnect any clients either. But does stop render all waiting commands?
>>> I'll have to look, I don't know if it does.
>>
>> It does.  This is what qemu uses to flush all spice server state to
>> device memory on migration.
>>
>> What is the reason for deleting all surfaces?
>
> Making sure all references are dropped to pci memory in devram.

Ah, because the spice server keeps a reference to the create command 
until the surface is destroyed, right?

There is QXL_IO_DESTROY_ALL_SURFACES + worker->destroy_surfaces() ...

The QXL_IO_UPDATE_MEM command does too much special stuff IMHO.
I also think we don't need to extend the libspice-server API.

We can add an I/O command which renders everything to device memory via 
stop+start.  We can zap all surfaces with the existing command + worker 
call.  We can add an I/O command to ask qxl to push the release queue 
head to the release ring.
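
For illustration, the qemu side of the two new I/O commands could be as small
as this (a sketch that would slot into the ioport_write() switch like the
patch below; the names are placeholders, not a final interface):

    case QXL_IO_FLUSH_SURFACES:
        /* render everything to device memory via a stop+start cycle */
        d->ssd.worker->stop(d->ssd.worker);
        d->ssd.worker->start(d->ssd.worker);
        break;
    case QXL_IO_FLUSH_RELEASE:
        /* push the release queue head to the release ring */
        qxl_push_free_res(d, 1 /* flush */);
        break;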

Comments?

cheers,
   Gerd
Alon Levy June 20, 2011, 4:32 p.m. UTC | #8
On Mon, Jun 20, 2011 at 05:50:32PM +0200, Gerd Hoffmann wrote:
> On 06/20/11 17:11, Alon Levy wrote:
> >On Mon, Jun 20, 2011 at 04:07:59PM +0200, Gerd Hoffmann wrote:
> >>>>What is the difference to one worker->stop() + worker->start() cycle?
> >>>>
> >>>
> >>>ok, stop+start won't disconnect any clients either. But does stop render all waiting commands?
> >>>I'll have to look, I don't know if it does.
> >>
> >>It does.  This is what qemu uses to flush all spice server state to
> >>device memory on migration.
> >>
> >>What is the reason for deleting all surfaces?
> >
> >Making sure all references are dropped to pci memory in devram.
> 
> Ah, because the spice server keeps a reference to the create command
> until the surface is destroyed, right?

Actually right, so my correction stands corrected.

> 
> There is is QXL_IO_DESTROY_ALL_SURFACES + worker->destroy_surfaces() ...
> 

Regarding QXL_IO_DESTROY_ALL_SURFACES: it destroys the primary surface too,
which is a little special; that's another difference - update_mem destroys
everything except the primary. I know I tried to destroy the primary but it
didn't work right, I don't recall why right now, so I guess I'll have to retry.

> The QXL_IO_UPDATE_MEM command does too much special stuff IMHO.
> I also think we don't need to extend the libspice-server API.
> 
> We can add a I/O command which renders everything to device memory
> via stop+start.  We can zap all surfaces with the existing command +
Yes, stop+start works nicely, I didn't realize that (I saw it before but assumed
it wouldn't be good enough); we just need to destroy the surfaces too.

> worker call.  We can add a I/O command to ask qxl to push the
> release queue head to the release ring.

So you suggest to replace QXL_IO_UPDATE_MEM with what, two io commands instead
of using the val parameter?
 QXL_IO_UPDATE_MEM
 QXL_IO_FLUSH_RELEASE
?

> 
> Comments?
> 
> cheers,
>   Gerd
>
Yonit Halperin June 21, 2011, 6:29 a.m. UTC | #9
On 06/20/2011 07:32 PM, Alon Levy wrote:
> On Mon, Jun 20, 2011 at 05:50:32PM +0200, Gerd Hoffmann wrote:
>> On 06/20/11 17:11, Alon Levy wrote:
>>> On Mon, Jun 20, 2011 at 04:07:59PM +0200, Gerd Hoffmann wrote:
>>>>>> What is the difference to one worker->stop() + worker->start() cycle?
>>>>>>
>>>>>
>>>>> ok, stop+start won't disconnect any clients either. But does stop render all waiting commands?
>>>>> I'll have to look, I don't know if it does.
>>>>
>>>> It does.  This is what qemu uses to flush all spice server state to
>>>> device memory on migration.
I don't see that STOP flushes the command rings. We must read and empty 
the command rings before going to S3.
>>>>
>>>> What is the reason for deleting all surfaces?
>>>
>>> Making sure all references are dropped to pci memory in devram.
>>
>> Ah, because the spice server keeps a reference to the create command
>> until the surface is destroyed, right?
>
> Actually right, so my correction stands corrected.
>
>>
>> There is is QXL_IO_DESTROY_ALL_SURFACES + worker->destroy_surfaces() ...
>>
>
> Regarding QXL_IO_DESTROY_ALL_SURFACES, it destroys the primary surface too,
> which is a little special, that's another difference - update_mem destroys
> everything except the primary. I know I tried to destroy the primary but it
> didn't work right, don't recall why right now, so I guess I'll have to retry.
>
I guess it is because DrvAssertMode(disable) destroyed the primary 
surface in a separate call. I don't think we need to separate the calls 
any more (we probably needed it when we thought S3 and resolution 
changes would have different paths).

>> The QXL_IO_UPDATE_MEM command does too much special stuff IMHO.
>> I also think we don't need to extend the libspice-server API.
>>
>> We can add a I/O command which renders everything to device memory
>> via stop+start.  We can zap all surfaces with the existing command +
> Yes, start+stop work nicely, didn't realize (saw it before, assumed
> it wouldn't be good enough), just need to destroy the surfaces too.
>
>> worker call.  We can add a I/O command to ask qxl to push the
>> release queue head to the release ring.
>
> So you suggest to replace QXL_IO_UPDATE_MEM with what, two io commands instead
> of using the val parameter?
>   QXL_IO_UPDATE_MEM
>   QXL_IO_FLUSH_RELEASE
> ?
>
>>
>> Comments?
>>
>> cheers,
>>    Gerd
>>
Gerd Hoffmann June 22, 2011, 9:13 a.m. UTC | #10
Hi,

>> worker call.  We can add a I/O command to ask qxl to push the
>> release queue head to the release ring.
>
> So you suggest to replace QXL_IO_UPDATE_MEM with what, two io commands instead
> of using the val parameter?

I'd like to (a) avoid updating the libspice-server API if possible and 
(b) have one I/O command for each logical step.  So going into S3 could 
look like this:

   (0) stop putting new commands into the rings
   (1) QXL_IO_NOTIFY_CMD
   (2) QXL_IO_NOTIFY_CURSOR
       qxl calls notify(), to make the worker thread empty the command
       rings before it processes the next dispatcher request.
   (3) QXL_IO_FLUSH_SURFACES (to be implemented)
       qxl calls stop()+start(), spice-server renders all surfaces,
       thereby flushing state to device memory.
   (4) QXL_IO_DESTROY_ALL_SURFACES
       zap surfaces
   (5) QXL_IO_FLUSH_RELEASE (to be implemented)
       push release queue head into the release ring, so the guest
       will see it and can release everything.

(1)+(2)+(4) exist already.
(3)+(5) can be done without libspice-server changes.
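
For illustration, the guest driver's suspend path following these steps could
look roughly like this (a sketch only; outb() and io_base are placeholders
for the driver's port-write helper and the qxl I/O BAR):

    /* step 0 is driver-internal: stop queueing new commands */
    outb(0, io_base + QXL_IO_NOTIFY_CMD);           /* (1) */
    outb(0, io_base + QXL_IO_NOTIFY_CURSOR);        /* (2) */
    outb(0, io_base + QXL_IO_FLUSH_SURFACES);       /* (3) new */
    outb(0, io_base + QXL_IO_DESTROY_ALL_SURFACES); /* (4) */
    outb(0, io_base + QXL_IO_FLUSH_RELEASE);        /* (5) new */
    /* then walk the release ring and free everything */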

Looks workable?

cheers,
   Gerd
Alon Levy June 22, 2011, 9:57 a.m. UTC | #11
On Wed, Jun 22, 2011 at 11:13:19AM +0200, Gerd Hoffmann wrote:
>   Hi,
> 
> >>worker call.  We can add a I/O command to ask qxl to push the
> >>release queue head to the release ring.
> >
> >So you suggest to replace QXL_IO_UPDATE_MEM with what, two io commands instead
> >of using the val parameter?
> 
> I'd like to (a) avoid updating the libspice-server API if possible
> and (b) have one I/O command for each logical step.  So going into
> S3 could look like this:
> 
>   (0) stop putting new commands into the rings
>   (1) QXL_IO_NOTIFY_CMD
>   (2) QXL_IO_NOTIFY_CURSOR
>       qxl calls notify(), to make the worker thread empty the command
>       rings before it processes the next dispatcher request.
>   (3) QXl_IO_FLUSH_SURFACES (to be implemented)
>       qxl calls stop()+start(), spice-server renders all surfaces,
>       thereby flushing state to device memory.
>   (4) QXL_IO_DESTROY_ALL_SURFACES
>       zap surfaces
>   (5) QXL_IO_FLUSH_RELEASE (to be implemented)
>       push release queue head into the release ring, so the guest
>       will see it and can release everything.
> 
> (1)+(2)+(4) exist already.
> (3)+(5) can be done without libspice-server changes.
> 
> Looks workable?

yeah. Yonit?

> 
> cheers,
>   Gerd
> 
>
Yonit Halperin June 26, 2011, 4:59 p.m. UTC | #12
Sorry for the late response, wasn't available.
I'm afraid that (1) and (2) will indeed wake up the worker, but will not ensure emptying the command ring, as that depends on the client pipe size.


----- Original Message -----
From: "Alon Levy" <alevy@redhat.com>
To: "Gerd Hoffmann" <kraxel@redhat.com>
Cc: qemu-devel@nongnu.org, yhalperi@redhat.com
Sent: Wednesday, June 22, 2011 11:57:54 AM
Subject: Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support

On Wed, Jun 22, 2011 at 11:13:19AM +0200, Gerd Hoffmann wrote:
>   Hi,
> 
> >>worker call.  We can add a I/O command to ask qxl to push the
> >>release queue head to the release ring.
> >
> >So you suggest to replace QXL_IO_UPDATE_MEM with what, two io commands instead
> >of using the val parameter?
> 
> I'd like to (a) avoid updating the libspice-server API if possible
> and (b) have one I/O command for each logical step.  So going into
> S3 could look like this:
> 
>   (0) stop putting new commands into the rings
>   (1) QXL_IO_NOTIFY_CMD
>   (2) QXL_IO_NOTIFY_CURSOR
>       qxl calls notify(), to make the worker thread empty the command
>       rings before it processes the next dispatcher request.
>   (3) QXl_IO_FLUSH_SURFACES (to be implemented)
>       qxl calls stop()+start(), spice-server renders all surfaces,
>       thereby flushing state to device memory.
>   (4) QXL_IO_DESTROY_ALL_SURFACES
>       zap surfaces
>   (5) QXL_IO_FLUSH_RELEASE (to be implemented)
>       push release queue head into the release ring, so the guest
>       will see it and can release everything.
> 
> (1)+(2)+(4) exist already.
> (3)+(5) can be done without libspice-server changes.
> 
> Looks workable?

yeah. Yonit?

> 
> cheers,
>   Gerd
> 
>
Alon Levy June 26, 2011, 5:47 p.m. UTC | #13
On Sun, Jun 26, 2011 at 12:59:06PM -0400, Yonit Halperin wrote:
> Sorry for the late response, wasn't available.
> I'm afraid  that (1) and (2) will indeed wakeup the worker, but will not assure emptying the command ring, as it depends on the client pipe size.
> 

I actually can't figure out what wakeup does (that's what both NOTIFY and
NOTIFY_CURSOR do, see hw/qxl.c). What we did in prepare_to_sleep before was
call flush_all_qxl_commands, which reads the command and cursor rings until
they are empty (calling flush_cursor_commands and flush_display_commands), waiting
whenever the pipe is too large. (avoiding this wait still needs to be done, but
after ensuring suspend is correct).

More to the point, this flush is done from handle_dev_destroy_surfaces, but
that is not good enough, since it also destroys the surfaces before we have
a chance to actually render them.

Perhaps RED_WORKER_MESSAGE_STOP should do a flush_all_qxl_commands?
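
In red_worker.c terms the suggestion amounts to something like this (a sketch
of the idea only, not the actual handle_dev_stop body):

    static void handle_dev_stop(RedWorker *worker)
    {
        /* drain the command and cursor rings first, so no commands are
         * left unprocessed in guest memory */
        flush_all_qxl_commands(worker);
        /* ... existing stop handling: render drawables, write surfaces
         *     back to device memory ... */
    }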

> 
> ----- Original Message -----
> From: "Alon Levy" <alevy@redhat.com>
> To: "Gerd Hoffmann" <kraxel@redhat.com>
> Cc: qemu-devel@nongnu.org, yhalperi@redhat.com
> Sent: Wednesday, June 22, 2011 11:57:54 AM
> Subject: Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
> 
> On Wed, Jun 22, 2011 at 11:13:19AM +0200, Gerd Hoffmann wrote:
> >   Hi,
> > 
> > >>worker call.  We can add a I/O command to ask qxl to push the
> > >>release queue head to the release ring.
> > >
> > >So you suggest to replace QXL_IO_UPDATE_MEM with what, two io commands instead
> > >of using the val parameter?
> > 
> > I'd like to (a) avoid updating the libspice-server API if possible
> > and (b) have one I/O command for each logical step.  So going into
> > S3 could look like this:
> > 
> >   (0) stop putting new commands into the rings
> >   (1) QXL_IO_NOTIFY_CMD
> >   (2) QXL_IO_NOTIFY_CURSOR
> >       qxl calls notify(), to make the worker thread empty the command
> >       rings before it processes the next dispatcher request.
> >   (3) QXl_IO_FLUSH_SURFACES (to be implemented)
> >       qxl calls stop()+start(), spice-server renders all surfaces,
> >       thereby flushing state to device memory.
> >   (4) QXL_IO_DESTROY_ALL_SURFACES
> >       zap surfaces
> >   (5) QXL_IO_FLUSH_RELEASE (to be implemented)
> >       push release queue head into the release ring, so the guest
> >       will see it and can release everything.
> > 
> > (1)+(2)+(4) exist already.
> > (3)+(5) can be done without libspice-server changes.
> > 
> > Looks workable?
> 
> yeah. Yonit?
> 
> > 
> > cheers,
> >   Gerd
> > 
> > 
>
Yonit Halperin June 27, 2011, 6:28 a.m. UTC | #14
On 06/26/2011 08:47 PM, Alon Levy wrote:
> On Sun, Jun 26, 2011 at 12:59:06PM -0400, Yonit Halperin wrote:
>> Sorry for the late response, wasn't available.
>> I'm afraid  that (1) and (2) will indeed wakeup the worker, but will not assure emptying the command ring, as it depends on the client pipe size.
>>
>
> I actually can't figure out what wakeup does (that's what both NOTIFY and
> NOTIFY_CURSOR do, see hw/qxl.c).
It turns on an event the worker is waiting for on epoll_wait.

> What we did in prepare_to_sleep before was
> call flush_all_qxl_commands, which reads the command and cursor rings until
> they are empty (calling flush_cursor_commands and flush_display_commands), waiting
> whenever the pipe is too large. (avoiding this wait still needs to be done, but
> after ensuring suspend is correct).
>
> More to the point this flush is done from handle_dev_destroy_surfaces, but
> this is not good enough since this also destroys the surfaces, before we have
> a chance to actually render the surfaces.
>
> Perhaps RED_WORKER_MESSAGE_STOP should do a flush_all_qxl_commands?
>
We can do it as long as it doesn't affect migration - does STOP happen 
before or after savevm? If it happens after, we can't touch the command 
ring, i.e., we can't call flush. And even if it happens before, do we 
really want to call flush during migration and presumably slow it down?

Cheers,
Yonit.

>>
>> ----- Original Message -----
>> From: "Alon Levy"<alevy@redhat.com>
>> To: "Gerd Hoffmann"<kraxel@redhat.com>
>> Cc: qemu-devel@nongnu.org, yhalperi@redhat.com
>> Sent: Wednesday, June 22, 2011 11:57:54 AM
>> Subject: Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
>>
>> On Wed, Jun 22, 2011 at 11:13:19AM +0200, Gerd Hoffmann wrote:
>>>    Hi,
>>>
>>>>> worker call.  We can add a I/O command to ask qxl to push the
>>>>> release queue head to the release ring.
>>>>
>>>> So you suggest to replace QXL_IO_UPDATE_MEM with what, two io commands instead
>>>> of using the val parameter?
>>>
>>> I'd like to (a) avoid updating the libspice-server API if possible
>>> and (b) have one I/O command for each logical step.  So going into
>>> S3 could look like this:
>>>
>>>    (0) stop putting new commands into the rings
>>>    (1) QXL_IO_NOTIFY_CMD
>>>    (2) QXL_IO_NOTIFY_CURSOR
>>>        qxl calls notify(), to make the worker thread empty the command
>>>        rings before it processes the next dispatcher request.
>>>    (3) QXl_IO_FLUSH_SURFACES (to be implemented)
>>>        qxl calls stop()+start(), spice-server renders all surfaces,
>>>        thereby flushing state to device memory.
>>>    (4) QXL_IO_DESTROY_ALL_SURFACES
>>>        zap surfaces
>>>    (5) QXL_IO_FLUSH_RELEASE (to be implemented)
>>>        push release queue head into the release ring, so the guest
>>>        will see it and can release everything.
>>>
>>> (1)+(2)+(4) exist already.
>>> (3)+(5) can be done without libspice-server changes.
>>>
>>> Looks workable?
>>
>> yeah. Yonit?
>>
>>>
>>> cheers,
>>>    Gerd
>>>
>>>
>>
Alon Levy June 27, 2011, 8:16 a.m. UTC | #15
On Mon, Jun 27, 2011 at 09:28:45AM +0300, yhalperi wrote:
> On 06/26/2011 08:47 PM, Alon Levy wrote:
> >On Sun, Jun 26, 2011 at 12:59:06PM -0400, Yonit Halperin wrote:
> >>Sorry for the late response, wasn't available.
> >>I'm afraid  that (1) and (2) will indeed wakeup the worker, but will not assure emptying the command ring, as it depends on the client pipe size.
> >>
> >
> >I actually can't figure out what wakeup does (that's what both NOTIFY and
> >NOTIFY_CURSOR do, see hw/qxl.c).
> It turns on an event the worker is waiting for on epoll_wait.
Ah, ok. So it will only read up to the pipe size like you said.

> 
> What we did in prepare_to_sleep before was
> >call flush_all_qxl_commands, which reads the command and cursor rings until
> >they are empty (calling flush_cursor_commands and flush_display_commands), waiting
> >whenever the pipe is too large. (avoiding this wait still needs to be done, but
> >after ensuring suspend is correct).
> >
> >More to the point this flush is done from handle_dev_destroy_surfaces, but
> >this is not good enough since this also destroys the surfaces, before we have
> >a chance to actually render the surfaces.
> >
> >Perhaps RED_WORKER_MESSAGE_STOP should do a flush_all_qxl_commands?
> >
> We can do it as long as it doesn't affect migration - does STOP
> happens before or after savevm? If it happens after - we can't touch
> the command ring, i.e., we can't call flush. And even if it happens
> before - do we really want to call flush during migration and
> presumably slow it down?
But if we don't, then the client that was connected before migration doesn't get some of
the messages it was supposed to get.

stop is called before hw/qxl.c:qxl_pre_save, from
ui/spice-display.c:qemu_spice_vm_change_state_handler
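
For reference, that handler does roughly the following (paraphrased, not an
exact copy of ui/spice-display.c):

    /* vm change state handler: runs before the vmstate save handlers */
    if (running) {
        ssd->worker->start(ssd->worker);
    } else {
        ssd->worker->stop(ssd->worker);   /* so stop precedes qxl_pre_save */
    }
    ssd->running = running;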


> 
> Cheers,
> Yonit.
> 
> >>
> >>----- Original Message -----
> >>From: "Alon Levy"<alevy@redhat.com>
> >>To: "Gerd Hoffmann"<kraxel@redhat.com>
> >>Cc: qemu-devel@nongnu.org, yhalperi@redhat.com
> >>Sent: Wednesday, June 22, 2011 11:57:54 AM
> >>Subject: Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
> >>
> >>On Wed, Jun 22, 2011 at 11:13:19AM +0200, Gerd Hoffmann wrote:
> >>>   Hi,
> >>>
> >>>>>worker call.  We can add a I/O command to ask qxl to push the
> >>>>>release queue head to the release ring.
> >>>>
> >>>>So you suggest to replace QXL_IO_UPDATE_MEM with what, two io commands instead
> >>>>of using the val parameter?
> >>>
> >>>I'd like to (a) avoid updating the libspice-server API if possible
> >>>and (b) have one I/O command for each logical step.  So going into
> >>>S3 could look like this:
> >>>
> >>>   (0) stop putting new commands into the rings
> >>>   (1) QXL_IO_NOTIFY_CMD
> >>>   (2) QXL_IO_NOTIFY_CURSOR
> >>>       qxl calls notify(), to make the worker thread empty the command
> >>>       rings before it processes the next dispatcher request.
> >>>   (3) QXl_IO_FLUSH_SURFACES (to be implemented)
> >>>       qxl calls stop()+start(), spice-server renders all surfaces,
> >>>       thereby flushing state to device memory.
> >>>   (4) QXL_IO_DESTROY_ALL_SURFACES
> >>>       zap surfaces
> >>>   (5) QXL_IO_FLUSH_RELEASE (to be implemented)
> >>>       push release queue head into the release ring, so the guest
> >>>       will see it and can release everything.
> >>>
> >>>(1)+(2)+(4) exist already.
> >>>(3)+(5) can be done without libspice-server changes.
> >>>
> >>>Looks workable?
> >>
> >>yeah. Yonit?
> >>
> >>>
> >>>cheers,
> >>>   Gerd
> >>>
> >>>
> >>
> 
>
Yonit Halperin June 27, 2011, 8:25 a.m. UTC | #16
On 06/27/2011 11:16 AM, Alon Levy wrote:
> On Mon, Jun 27, 2011 at 09:28:45AM +0300, yhalperi wrote:
>> On 06/26/2011 08:47 PM, Alon Levy wrote:
>>> On Sun, Jun 26, 2011 at 12:59:06PM -0400, Yonit Halperin wrote:
>>>> Sorry for the late response, wasn't available.
>>>> I'm afraid  that (1) and (2) will indeed wakeup the worker, but will not assure emptying the command ring, as it depends on the client pipe size.
>>>>
>>>
>>> I actually can't figure out what wakeup does (that's what both NOTIFY and
>>> NOTIFY_CURSOR do, see hw/qxl.c).
>> It turns on an event the worker is waiting for on epoll_wait.
> Ah, ok. So it will only read up to the pipe size like you said.
>
>>
>> What we did in prepare_to_sleep before was
>>> call flush_all_qxl_commands, which reads the command and cursor rings until
>>> they are empty (calling flush_cursor_commands and flush_display_commands), waiting
>>> whenever the pipe is too large. (avoiding this wait still needs to be done, but
>>> after ensuring suspend is correct).
>>>
>>> More to the point this flush is done from handle_dev_destroy_surfaces, but
>>> this is not good enough since this also destroys the surfaces, before we have
>>> a chance to actually render the surfaces.
>>>
>>> Perhaps RED_WORKER_MESSAGE_STOP should do a flush_all_qxl_commands?
>>>
>> We can do it as long as it doesn't affect migration - does STOP
>> happens before or after savevm? If it happens after - we can't touch
>> the command ring, i.e., we can't call flush. And even if it happens
>> before - do we really want to call flush during migration and
>> presumably slow it down?
> But if we don't then the client connected before migration doesn't get some of
> the messages it was supposed to get.
I think it will receive them after migration, since the command ring 
was stored.
>
> stop is called before hw/qxl.c:qxl_pre_save, from
> ui/spice-display.c:qemu_spice_vm_change_state_handler
>
>
>>
>> Cheers,
>> Yonit.
>>
>>>>
>>>> ----- Original Message -----
>>>> From: "Alon Levy"<alevy@redhat.com>
>>>> To: "Gerd Hoffmann"<kraxel@redhat.com>
>>>> Cc: qemu-devel@nongnu.org, yhalperi@redhat.com
>>>> Sent: Wednesday, June 22, 2011 11:57:54 AM
>>>> Subject: Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
>>>>
>>>> On Wed, Jun 22, 2011 at 11:13:19AM +0200, Gerd Hoffmann wrote:
>>>>>    Hi,
>>>>>
>>>>>>> worker call.  We can add a I/O command to ask qxl to push the
>>>>>>> release queue head to the release ring.
>>>>>>
>>>>>> So you suggest to replace QXL_IO_UPDATE_MEM with what, two io commands instead
>>>>>> of using the val parameter?
>>>>>
>>>>> I'd like to (a) avoid updating the libspice-server API if possible
>>>>> and (b) have one I/O command for each logical step.  So going into
>>>>> S3 could look like this:
>>>>>
>>>>>    (0) stop putting new commands into the rings
>>>>>    (1) QXL_IO_NOTIFY_CMD
>>>>>    (2) QXL_IO_NOTIFY_CURSOR
>>>>>        qxl calls notify(), to make the worker thread empty the command
>>>>>        rings before it processes the next dispatcher request.
>>>>>    (3) QXl_IO_FLUSH_SURFACES (to be implemented)
>>>>>        qxl calls stop()+start(), spice-server renders all surfaces,
>>>>>        thereby flushing state to device memory.
>>>>>    (4) QXL_IO_DESTROY_ALL_SURFACES
>>>>>        zap surfaces
>>>>>    (5) QXL_IO_FLUSH_RELEASE (to be implemented)
>>>>>        push release queue head into the release ring, so the guest
>>>>>        will see it and can release everything.
>>>>>
>>>>> (1)+(2)+(4) exist already.
>>>>> (3)+(5) can be done without libspice-server changes.
>>>>>
>>>>> Looks workable?
>>>>
>>>> yeah. Yonit?
>>>>
>>>>>
>>>>> cheers,
>>>>>    Gerd
>>>>>
>>>>>
>>>>
>>
>>
Alon Levy June 27, 2011, 9:20 a.m. UTC | #17
On Mon, Jun 27, 2011 at 11:25:47AM +0300, yhalperi wrote:
> On 06/27/2011 11:16 AM, Alon Levy wrote:
> >On Mon, Jun 27, 2011 at 09:28:45AM +0300, yhalperi wrote:
> >>On 06/26/2011 08:47 PM, Alon Levy wrote:
> >>>On Sun, Jun 26, 2011 at 12:59:06PM -0400, Yonit Halperin wrote:
> >>>>Sorry for the late response, wasn't available.
> >>>>I'm afraid  that (1) and (2) will indeed wakeup the worker, but will not assure emptying the command ring, as it depends on the client pipe size.
> >>>>
> >>>
> >>>I actually can't figure out what wakeup does (that's what both NOTIFY and
> >>>NOTIFY_CURSOR do, see hw/qxl.c).
> >>It turns on an event the worker is waiting for on epoll_wait.
> >Ah, ok. So it will only read up to the pipe size like you said.
> >
> >>
> >>What we did in prepare_to_sleep before was
> >>>call flush_all_qxl_commands, which reads the command and cursor rings until
> >>>they are empty (calling flush_cursor_commands and flush_display_commands), waiting
> >>>whenever the pipe is too large. (avoiding this wait still needs to be done, but
> >>>after ensuring suspend is correct).
> >>>
> >>>More to the point this flush is done from handle_dev_destroy_surfaces, but
> >>>this is not good enough since this also destroys the surfaces, before we have
> >>>a chance to actually render the surfaces.
> >>>
> >>>Perhaps RED_WORKER_MESSAGE_STOP should do a flush_all_qxl_commands?
> >>>
> >>We can do it as long as it doesn't affect migration - does STOP
> >>happens before or after savevm? If it happens after - we can't touch
> >>the command ring, i.e., we can't call flush. And even if it happens
> >>before - do we really want to call flush during migration and
> >>presumably slow it down?
> >But if we don't then the client connected before migration doesn't get some of
> >the messages it was supposed to get.
> I think it will receive them after migration, since the command ring
> was stored.
Our confusion here is because you think there is still seamless migration. Unfortunately
it doesn't work right now; unless you plan to fix it, the only form of migration right
now is switch-host, and for that those commands will be lost - the new connection will receive
images for each surface. If you treat the client as seamless you are completely right.

> >
> >stop is called before hw/qxl.c:qxl_pre_save, from
> >ui/spice-display.c:qemu_spice_vm_change_state_handler
> >
> >
> >>
> >>Cheers,
> >>Yonit.
> >>
> >>>>
> >>>>----- Original Message -----
> >>>>From: "Alon Levy"<alevy@redhat.com>
> >>>>To: "Gerd Hoffmann"<kraxel@redhat.com>
> >>>>Cc: qemu-devel@nongnu.org, yhalperi@redhat.com
> >>>>Sent: Wednesday, June 22, 2011 11:57:54 AM
> >>>>Subject: Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
> >>>>
> >>>>On Wed, Jun 22, 2011 at 11:13:19AM +0200, Gerd Hoffmann wrote:
> >>>>>   Hi,
> >>>>>
> >>>>>>>worker call.  We can add a I/O command to ask qxl to push the
> >>>>>>>release queue head to the release ring.
> >>>>>>
> >>>>>>So you suggest to replace QXL_IO_UPDATE_MEM with what, two io commands instead
> >>>>>>of using the val parameter?
> >>>>>
> >>>>>I'd like to (a) avoid updating the libspice-server API if possible
> >>>>>and (b) have one I/O command for each logical step.  So going into
> >>>>>S3 could look like this:
> >>>>>
> >>>>>   (0) stop putting new commands into the rings
> >>>>>   (1) QXL_IO_NOTIFY_CMD
> >>>>>   (2) QXL_IO_NOTIFY_CURSOR
> >>>>>       qxl calls notify(), to make the worker thread empty the command
> >>>>>       rings before it processes the next dispatcher request.
> >>>>>   (3) QXl_IO_FLUSH_SURFACES (to be implemented)
> >>>>>       qxl calls stop()+start(), spice-server renders all surfaces,
> >>>>>       thereby flushing state to device memory.
> >>>>>   (4) QXL_IO_DESTROY_ALL_SURFACES
> >>>>>       zap surfaces
> >>>>>   (5) QXL_IO_FLUSH_RELEASE (to be implemented)
> >>>>>       push release queue head into the release ring, so the guest
> >>>>>       will see it and can release everything.
> >>>>>
> >>>>>(1)+(2)+(4) exist already.
> >>>>>(3)+(5) can be done without libspice-server changes.
> >>>>>
> >>>>>Looks workable?
> >>>>
> >>>>yeah. Yonit?
> >>>>
> >>>>>
> >>>>>cheers,
> >>>>>   Gerd
> >>>>>
> >>>>>
> >>>>
> >>
> >>
>
Gerd Hoffmann June 29, 2011, 9:01 a.m. UTC | #18
Hi,

>> I think it will receive them after migration, since the command ring
>> was stored.
> Our confusion here is because you think there is still seemless migration. Unfortunately
> it doesn't work right now, unless you plan to fix it the only form of migration right
> now is switch-host, and for that those commands will be lost, the new connection will receive
> images for each surface. If you treat the client as seemless you are completely right.

The spice server needs this too so it can render the surfaces correctly 
before sending the surface images to the client (or send the old 
surfaces and the commands on top of that).

That is one difference between qemu migration and S3 state: For qemu 
migration it is no problem to have unprocessed commands in the rings, 
they will simply be processed once the spice server state is restored. 
When the guest driver restores the state as it comes back from S3 it 
needs the command rings to do so; that's why they must be flushed before 
entering S3 ...

cheers,
   Gerd
Alon Levy June 29, 2011, 9:21 a.m. UTC | #19
On Wed, Jun 29, 2011 at 11:01:11AM +0200, Gerd Hoffmann wrote:
>   Hi,
> 
> >>I think it will receive them after migration, since the command ring
> >>was stored.
> >Our confusion here is because you think there is still seemless migration. Unfortunately
> >it doesn't work right now, unless you plan to fix it the only form of migration right
> >now is switch-host, and for that those commands will be lost, the new connection will receive
> >images for each surface. If you treat the client as seemless you are completely right.
> 
> The spice server needs this too so it can render the surfaces
> correctly before sending the surface images to the client (or send
> the old surfaces and the commands on top of that).
> 
> That is one difference between qemu migration and S3 state: For qemu
> migration it is no problem to have unprocessed commands in the
> rings, they will simply be processed once the spice server state is
> restored. When the guest driver restores the state when it comes
> back from S3 it needs the command rings to do so, thats why they
> must be flushed before entering S3 ...

You mean it needs the command rings to be empty before, since they are lost
during the reset, right?

> 
> cheers,
>   Gerd
>
Gerd Hoffmann June 29, 2011, 10:25 a.m. UTC | #20
On 06/29/11 11:21, Alon Levy wrote:
> On Wed, Jun 29, 2011 at 11:01:11AM +0200, Gerd Hoffmann wrote:
>>    Hi,
>>
>>>> I think it will receive them after migration, since the command ring
>>>> was stored.
>>> Our confusion here is because you think there is still seemless migration. Unfortunately
>>> it doesn't work right now, unless you plan to fix it the only form of migration right
>>> now is switch-host, and for that those commands will be lost, the new connection will receive
>>> images for each surface. If you treat the client as seemless you are completely right.
>>
>> The spice server needs this too so it can render the surfaces
>> correctly before sending the surface images to the client (or send
>> the old surfaces and the commands on top of that).
>>
>> That is one difference between qemu migration and S3 state: For qemu
>> migration it is no problem to have unprocessed commands in the
>> rings, they will simply be processed once the spice server state is
>> restored. When the guest driver restores the state when it comes
>> back from S3 it needs the command rings to do so, thats why they
>> must be flushed before entering S3 ...
>
> You mean it needs the command rings to be empty before, since they are lost
> during the reset, right?

One more reason.  I wasn't aware there is a reset anyway, I was thinking 
more about the command ordering.  Without reset spice-server would first 
process the old commands (which may reference non-existing surfaces), 
then the new commands which re-create all state, which is simply the 
wrong order.  With reset the old commands just get lost, which causes 
rendering bugs.

Is it an option to have the driver just remove the commands from the 
ring (and resubmit after resume)?  I suspect it isn't as there is no 
race-free way to do that, right?

cheers,
   Gerd
Alon Levy June 29, 2011, 11:38 a.m. UTC | #21
On Wed, Jun 29, 2011 at 12:25:00PM +0200, Gerd Hoffmann wrote:
> On 06/29/11 11:21, Alon Levy wrote:
> >On Wed, Jun 29, 2011 at 11:01:11AM +0200, Gerd Hoffmann wrote:
> >>   Hi,
> >>
> >>>>I think it will receive them after migration, since the command ring
> >>>>was stored.
> >>>Our confusion here is because you think there is still seemless migration. Unfortunately
> >>>it doesn't work right now, unless you plan to fix it the only form of migration right
> >>>now is switch-host, and for that those commands will be lost, the new connection will receive
> >>>images for each surface. If you treat the client as seemless you are completely right.
> >>
> >>The spice server needs this too so it can render the surfaces
> >>correctly before sending the surface images to the client (or send
> >>the old surfaces and the commands on top of that).
> >>
> >>That is one difference between qemu migration and S3 state: For qemu
> >>migration it is no problem to have unprocessed commands in the
> >>rings, they will simply be processed once the spice server state is
> >>restored. When the guest driver restores the state when it comes
> >>back from S3 it needs the command rings to do so, thats why they
> >>must be flushed before entering S3 ...
> >
> >You mean it needs the command rings to be empty before, since they are lost
> >during the reset, right?
> 
> One more reason.  Wasn't aware there is a reset anyway, was thinking
Hah. The reset is the whole mess... otherwise S3 would have been trivial,
and actually disabling the reset was the first thing we did, but of course
it doesn't solve S4 in that case.

> more about the command ordering.  Without reset spice-server would
> first process the old commands (which may reference non-existing
> surfaces), then the new commands which re-recreate all state, which
> is simply the wrong order.  With reset the old commands just get
> lost which causes rendering bugs.
> 
> Is it an option to have the driver just remove the commands from the
> ring (and resubmit after resume)?  I suspect it isn't as there is no
> race-free way to do that, right?
Right - the whole ring assumes that the same side removes. Of course we
can add an IO for that (two - FREEZE and UNFREEZE), but I think this is
the wrong approach. Instead, render all the commands and drop the
wait for the client. Right now if we flush we do actually wait for the client,
but I plan to remove that later. (We do this right now for update_area as
well, and that's much higher frequency.)

> 
> cheers,
>   Gerd
> 
>
Yonit Halperin June 30, 2011, 10:26 a.m. UTC | #22
On 06/29/2011 02:38 PM, Alon Levy wrote:
> On Wed, Jun 29, 2011 at 12:25:00PM +0200, Gerd Hoffmann wrote:
>> On 06/29/11 11:21, Alon Levy wrote:
>>> On Wed, Jun 29, 2011 at 11:01:11AM +0200, Gerd Hoffmann wrote:
>>>>    Hi,
>>>>
>>>>>> I think it will receive them after migration, since the command ring
>>>>>> was stored.
>>>>> Our confusion here is because you think there is still seemless migration. Unfortunately
>>>>> it doesn't work right now, unless you plan to fix it the only form of migration right
>>>>> now is switch-host, and for that those commands will be lost, the new connection will receive
>>>>> images for each surface. If you treat the client as seemless you are completely right.
>>>>
>>>> The spice server needs this too so it can render the surfaces
>>>> correctly before sending the surface images to the client (or send
>>>> the old surfaces and the commands on top of that).
>>>>
>>>> That is one difference between qemu migration and S3 state: For qemu
>>>> migration it is no problem to have unprocessed commands in the
>>>> rings, they will simply be processed once the spice server state is
>>>> restored. When the guest driver restores the state when it comes
>>>> back from S3 it needs the command rings to do so, thats why they
>>>> must be flushed before entering S3 ...
>>>
>>> You mean it needs the command rings to be empty before, since they are lost
>>> during the reset, right?
>>
>> One more reason.  Wasn't aware there is a reset anyway, was thinking
> hah. The reset is the whole mess.. otherwise S3 would have been trivial,
> and actually disabling the reset was the first thing we did, but of course
> it doesn't solve S4 in that case.
>
>> more about the command ordering.  Without reset spice-server would
>> first process the old commands (which may reference non-existing
>> surfaces), then the new commands which re-recreate all state, which
>> is simply the wrong order.  With reset the old commands just get
>> lost which causes rendering bugs.
>>
>> Is it an option to have the driver just remove the commands from the
>> ring (and resubmit after resume)?  I suspect it isn't as there is no
>> race-free way to do that, right?
> Right - the whole ring assumes that the same side removes. of course we
> can add an IO for that (two - FREEZE and UNFREEZE). But I think this is
> the wrong approach. Instead, rendering all the commands, and dropping the
> wait for the client. Right now if we flush we do actually wait for the client,
> but I plan to remove this later. (we do this right now for update_area as
> well and that's much higher frequency).
>
>>
>> cheers,
>>    Gerd
>>
>>

To conclude, we still need to flush the command ring before stop. We 
don't want to change migration. So we still need to change the spice 
server API. Gerd?
Gerd Hoffmann June 30, 2011, 10:46 a.m. UTC | #23
Hi,

>> Right - the whole ring assumes that the same side removes. of course we
>> can add an IO for that (two - FREEZE and UNFREEZE). But I think this is
>> the wrong approach. Instead, rendering all the commands, and dropping the
>> wait for the client. Right now if we flush we do actually wait for the
>> client,
>> but I plan to remove this later. (we do this right now for update_area as
>> well and that's much higher frequency).

> To conclude, we still need to flush the command ring before stop. We
> don't want to change migration. So we still need to change spice server
> api. Gerd?

Yes, it looks like there is no way around it: we have to flush the command rings.

If we have to change the spice-server API anyway, then I think we should 
support async I/O at the libspice-server API level.  Drop the qxl 
async thread, have a way to submit async requests to the worker, and let 
libspice call us back on completion.
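
One possible shape for that (hypothetical, not an agreed API): the device tags
each async request with a cookie and libspice-server calls back when the
worker has finished it, e.g.:

    /* added to QXLInterface (spice-server -> qemu direction) */
    void (*async_complete)(QXLInstance *sin, uint64_t cookie);

    /* qemu side: completion just raises the qxl interrupt so the guest
     * can continue; the interrupt bit name here is a placeholder */
    static void interface_async_complete(QXLInstance *sin, uint64_t cookie)
    {
        PCIQXLDevice *qxl = container_of(sin, PCIQXLDevice, ssd.qxl);

        qxl_send_events(qxl, QXL_INTERRUPT_IO_CMD);
    }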

comments?

cheers,
   Gerd
Alon Levy June 30, 2011, 11:41 a.m. UTC | #24
On Thu, Jun 30, 2011 at 12:46:59PM +0200, Gerd Hoffmann wrote:
>   Hi,
> 
> >>Right - the whole ring assumes that the same side removes. of course we
> >>can add an IO for that (two - FREEZE and UNFREEZE). But I think this is
> >>the wrong approach. Instead, rendering all the commands, and dropping the
> >>wait for the client. Right now if we flush we do actually wait for the
> >>client,
> >>but I plan to remove this later. (we do this right now for update_area as
> >>well and that's much higher frequency).
> 
> >To conclude, we still need to flush the command ring before stop. We
> >don't want to change migration. So we still need to change spice server
> >api. Gerd?
> 
> Yes, looks like there is no way around that to flush the command rings.
> 
> When we have to change the spice-server api anyway, then we should
> support async I/O at libspice-server API level I think.  Drop the
> qxl async thread, have a way to submit async requests to the worker,
> let libspice call us back on completion.
> 
> comments?

My thoughts exactly. Is there any reason to support the old non-async messages if we
do this? I.e., do we add _ASYNC versions or just change the meaning of the existing ones?
As long as we change the major version of the server (it will be 0.10) I think it isn't
problematic, right?

The only difference with this approach is that we will have to do the reads from the
I/O thread of qemu, which means a single thread reading for multiple qxl devices, but
since it will just be doing very minimal work I don't think it should be a problem (it
will just be setting the IRQ).

> 
> cheers,
>   Gerd
Gerd Hoffmann June 30, 2011, 12:12 p.m. UTC | #25
Hi,

> My thoughts exactly. Any reason to support the old non async messages if we
> do this?

Yes.  Backward compatibility.

> The only difference with this approach is that we will have to do the reads from the
> io thread of qemu,

Hmm?  Which reads?

I'd add a completion callback to QXLInterface.

cheers,
   Gerd
Alon Levy June 30, 2011, 12:50 p.m. UTC | #26
On Thu, Jun 30, 2011 at 02:12:52PM +0200, Gerd Hoffmann wrote:
>   Hi,
> 
> >My thoughts exactly. Any reason to support the old non async messages if we
> >do this?
> 
> Yes.  Backward compatibility.

So at least deprecate them, to be dropped later? I don't like that the code just gets
bigger and bigger.

> 
> >The only difference with this approach is that we will have to do the reads from the
> >io thread of qemu,
> 
> Hmm?  Which reads?

I was thinking of a different solution - one in which the same "READY" messages are
written, but read from a different place. That would not have actually required any changes
to the spice-server API. But if you say you prefer to add a completion callback, that's cool.

Just to answer, I was thinking of this flow for the async commands:

vcpu thread -> pipe_to_red_worker : update_area_async
red_worker thread -> pipe_to_io_thread : update_area_async complete

but that wouldn't have worked, would it? Unless we made sure to prevent attempts to do sync/async
requests while an async request is in progress.

> 
> I'd add a completion callback to QXLInterface.
> 
> cheers,
>   Gerd
> 
>
Gerd Hoffmann June 30, 2011, 1:17 p.m. UTC | #27
Hi,

>> Yes.  Backward compatibility.
>
> So at least deprecate it to be dropped later? I don't like that the code just gets
> bigger and bigger.

Deprecating them is fine.

> I was thinking of a different solution - one in which the same "READY" messages are
> written, but read from a different place. That would not have actually required any changes
> to the spice-server api. But if you say you prefer to add a completion callback, that's cool.
>
> Just to answer, I was thinking of this flow for the async commands:
>
> vcpu thread ->  pipe_to_red_worker : update_area_async
> red_worker thread ->  pipe_to_io_thread : update_area_async complete
>
> but that wouldn't have worked, would it? unless we made sure to prevent tries to do async/sync
> while async in progress.

The pipe is a libspice-server internal thing and it should stay that 
way.  libspice-server should be able to use something completely 
different for dispatcher <-> worker communication (say a linked job list 
with mutex locking and condition variable wakeup) and everything should 
continue to work.
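
A minimal sketch of what such an alternative could look like (illustration
only, pthread-based, not spice-server code):

    #include <pthread.h>
    #include <stddef.h>

    typedef struct Job {
        struct Job *next;
        void (*run)(void *opaque);
        void *opaque;
    } Job;

    static Job *job_head;
    static pthread_mutex_t job_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  job_cond = PTHREAD_COND_INITIALIZER;

    /* dispatcher side: queue a job and wake the worker */
    static void dispatcher_submit(Job *job)
    {
        pthread_mutex_lock(&job_lock);
        job->next = job_head;
        job_head = job;
        pthread_cond_signal(&job_cond);
        pthread_mutex_unlock(&job_lock);
    }

    /* worker side: block until a job is available */
    static Job *worker_wait_for_job(void)
    {
        Job *job;

        pthread_mutex_lock(&job_lock);
        while (job_head == NULL) {
            pthread_cond_wait(&job_cond, &job_lock);
        }
        job = job_head;
        job_head = job->next;
        pthread_mutex_unlock(&job_lock);
        return job;
    }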

cheers,
   Gerd

Patch

diff --git a/hw/qxl.c b/hw/qxl.c
index ca5c8b3..4945d95 100644
--- a/hw/qxl.c
+++ b/hw/qxl.c
@@ -1016,6 +1016,32 @@  static void ioport_write(void *opaque, uint32_t addr, uint32_t val)
     case QXL_IO_DESTROY_ALL_SURFACES:
         d->ssd.worker->destroy_surfaces(d->ssd.worker);
         break;
+    case QXL_IO_UPDATE_MEM:
+        dprint(d, 1, "QXL_IO_UPDATE_MEM (%d) entry (%s, s#=%d, res#=%d)\n",
+            val, qxl_mode_to_string(d->mode), d->guest_surfaces.count,
+            d->num_free_res);
+        switch (val) {
+        case (QXL_UPDATE_MEM_RENDER_ALL):
+            d->ssd.worker->update_mem(d->ssd.worker);
+            break;
+        case (QXL_UPDATE_MEM_FLUSH): {
+            QXLReleaseRing *ring = &d->ram->release_ring;
+            if (ring->prod - ring->cons + 1 == ring->num_items) {
+                // TODO - "return" a value to the guest and let it loop?
+                fprintf(stderr,
+                    "ERROR: no flush, full release ring [p%d,%dc]\n",
+                    ring->prod, ring->cons);
+            }
+            qxl_push_free_res(d, 1 /* flush */);
+            dprint(d, 1, "QXL_IO_UPDATE_MEM exit (%s, s#=%d, res#=%d,%p)\n",
+                qxl_mode_to_string(d->mode), d->guest_surfaces.count,
+                d->num_free_res, d->last_release);
+            break;
+        }
+        default:
+            fprintf(stderr, "ERROR: unexpected value to QXL_IO_UPDATE_MEM\n");
+        }
+        break;
     default:
         fprintf(stderr, "%s: ioport=0x%x, abort()\n", __FUNCTION__, io_port);
         abort();