
Patch to improve handling of server sockets

Message ID alpine.LNX.2.00.1005041450180.13167@nitsch.suse.de
State New

Commit Message

Reinhard Max May 4, 2010, 1:49 p.m. UTC
Hi,

I am maintaining the tightvnc package for openSUSE and was recently 
confronted with an alleged vnc problem with QEMU that turned out to be 
a shortcoming in QEMU's code for handling TCP server sockets, which is 
used by the vnc and char modules.

The problem occurs when the address to listen on is given as a name 
which resolves to multiple IP addresses, the most prominent example 
being "localhost" resolving to 127.0.0.1 and ::1.

The existing code stopped walking the list of addresses returned by 
getaddrinfo() as soon as one socket was successfully opened and bound. 
The result was that a qemu instance started with "-vnc localhost:42" 
only listened on ::1 and wasn't reachable through 127.0.0.1. The fact 
that the code set the IPV6_V6ONLY socket option didn't help, because 
that option only works when the socket gets bound to the IPv6 wildcard 
address (::), but is useless for explicit address bindings.

The attached patch against QEMU 0.11.0 extends inet_listen() to create 
sockets for as many addresses from the address list as possible and 
adapts its callers and their data structures to deal with a linked 
list of socket FDs rather than a single file descriptor.
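
To illustrate the idea, here is a minimal standalone sketch (error 
handling trimmed; the actual change, including the FdList type it 
introduces, is in the patch below):

    /* Sketch: create one listening socket per address the name resolves
     * to, instead of stopping after the first successful bind. */
    #include <netdb.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int listen_all(const char *host, const char *port, int *fds, int maxfds)
    {
        struct addrinfo hints, *res, *e;
        int n = 0;

        memset(&hints, 0, sizeof(hints));
        hints.ai_flags = AI_PASSIVE | AI_ADDRCONFIG;
        hints.ai_family = AF_UNSPEC;        /* IPv4 and IPv6 candidates */
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, port, &hints, &res) != 0)
            return -1;

        for (e = res; e != NULL && n < maxfds; e = e->ai_next) {
            int on = 1;
            int fd = socket(e->ai_family, e->ai_socktype, e->ai_protocol);
            if (fd < 0)
                continue;
            setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));
            if (e->ai_family == AF_INET6) {
                /* IPv4 gets its own socket now, so keep this one v6-only */
                setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &on, sizeof(on));
            }
            if (bind(fd, e->ai_addr, e->ai_addrlen) != 0 ||
                listen(fd, 1) != 0) {
                close(fd);
                continue;
            }
            fds[n++] = fd;                  /* keep every bound socket */
        }
        freeaddrinfo(res);
        return n;                           /* number of listening sockets */
    }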

So far I've only done some testing with the -vnc option. More testing 
is needed in the qemu-char area and for the parts of the code that get 
triggered from QEMU's Monitor.


Please review and comment.


cu
 	Reinhard

P.S. Please keep me in Cc when replying.

Comments

Anthony Liguori May 4, 2010, 4:23 p.m. UTC | #1
On 05/04/2010 08:49 AM, Reinhard Max wrote:
> Hi,
>
> I am maintaining the tightvnc package for openSUSE and was recently 
> confronted with an alleged vnc problem with QEMU that turned out to be 
> a shortcoming in QEMU's code for handling TCP server sockets, which is 
> used by the vnc and char modules.
>
> The problem occurs when the address to listen on is given as a name 
> which resolves to multiple IP addresses, the most prominent example 
> being "localhost" resolving to 127.0.0.1 and ::1.
>
> The existing code stopped walking the list of addresses returned by 
> getaddrinfo() as soon as one socket was successfully opened and bound. 
> The result was that a qemu instance started with "-vnc localhost:42" 
> only listened on ::1 and wasn't reachable through 127.0.0.1. The fact 
> that the code set the IPV6_V6ONLY socket option didn't help, because 
> that option only works when the socket gets bound to the IPv6 wildcard 
> address (::), but is useless for explicit address bindings.
>
> The attached patch against QEMU 0.11.0 extends inet_listen() to create 
> sockets for as many addresses from the address list as possible and 
> adapts its callers and their data structures to deal with a linked 
> list of socket FDs rather than a single file descriptor.
>
> So far I've only done some testing with the -vnc option. More testing 
> is needed in the qemu-char area and for the parts of the code that get 
> triggered from QEMU's Monitor.

0.11.0 is pretty old.  Please update your patch against the latest git.

But that said, I'm not sure we're doing the wrong thing right now.  
Gerd, what do you think about this behavior?

Regards,

Anthony Liguori

>
> Please review and comment.
>
>
> cu
>     Reinhard
>
> P.S. Please keep me in Cc when replying.
Reinhard Max May 4, 2010, 8:44 p.m. UTC | #2
Hi,

On Tue, 4 May 2010 at 11:23, Anthony Liguori wrote:

> 0.11.0 is pretty old.  Please update your patch against the latest git.

Ok, will do.

> I'm not sure we're doing the wrong thing right now.

Well, I think it just can't be correct that an IPv6-enabled process 
running on a dual-stack machine, when told to listen on "localhost", 
ends up listening only on ::1 and doesn't accept connections to 
127.0.0.1.


cu
 	Reinhard
Anthony Liguori May 4, 2010, 9:47 p.m. UTC | #3
On 05/04/2010 03:44 PM, Reinhard Max wrote:
> Hi,
>
> On Tue, 4 May 2010 at 11:23, Anthony Liguori wrote:
>
>> 0.11.0 is pretty old.  Please update your patch against the latest git.
>
> Ok, will do.
>
>> I'm not sure we're doing the wrong thing right now.
>
> Well, I think it just can't be correct that an IPv6-enabled process 
> running on a dual-stack machine, when told to listen on 
> "localhost", ends up listening only on ::1

My understanding is that for the majority of systems, 127.0.0.1 should 
still connect to ::1.  The reason we have the ipv4 and ipv6 options is 
because this doesn't work 100% of the time.  Maybe your configuration is 
an example of this.
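
To make that concrete, my rough mental model of the "compat" behaviour 
is something like this sketch (assuming a dual-stack OS with the usual 
defaults; this is exactly the part I'd like confirmed):

    /* Sketch of the usual dual-stack fallback: one AF_INET6 socket bound
     * to the wildcard with IPV6_V6ONLY cleared also accepts IPv4 clients,
     * which show up as mapped addresses like ::ffff:127.0.0.1.  An
     * explicit bind to ::1, as with "-vnc localhost:42", never gets
     * this fallback. */
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int dual_stack_listen(unsigned short port)
    {
        struct sockaddr_in6 sa;
        int off = 0;
        int fd = socket(AF_INET6, SOCK_STREAM, 0);

        if (fd < 0)
            return -1;
        /* 0 = also accept IPv4; only matters for the :: wildcard */
        setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof(off));

        memset(&sa, 0, sizeof(sa));
        sa.sin6_family = AF_INET6;
        sa.sin6_addr = in6addr_any;         /* ::, not ::1 */
        sa.sin6_port = htons(port);
        if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0 ||
            listen(fd, 1) != 0) {
            close(fd);
            return -1;
        }
        return fd;
    }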

Honestly, I don't understand how ipv4 compat is supposed to work outside 
of qemu so I'm looking for some additional input.

Regards,

Anthony Liguori

> and doesn't accept connections to 127.0.0.1.
>
>
> cu
>     Reinhard
Gerd Hoffmann May 4, 2010, 9:47 p.m. UTC | #4
On 05/04/10 18:23, Anthony Liguori wrote:
> On 05/04/2010 08:49 AM, Reinhard Max wrote:
>> Hi,
>>
>> I am maintaining the tightvnc package for openSUSE and was recently
>> confronted with an alleged vnc problem with QEMU that turned out to be
>> a shortcoming in QEMU's code for handling TCP server sockets, which is
>> used by the vnc and char modules.
>>
>> The problem occurs when the address to listen on is given as a name
>> which resolves to multiple IP addresses, the most prominent example
>> being "localhost" resolving to 127.0.0.1 and ::1.

My tigervnc (tightvnc successor) has IPv6 support and handles this just 
fine.  There is also the option to force qemu to listen on ipv4 (or 
ipv6) only.

>> The existing code stopped walking the list of addresses returned by
>> getaddrinfo() as soon as one socket was successfully opened and bound.
>> The result was that a qemu instance started with "-vnc localhost:42"
>> only listened on ::1 and wasn't reachable through 127.0.0.1. The fact
>> that the code set the IPV6_V6ONLY socket option didn't help, because
>> that option only works when the socket gets bound to the IPv6 wildcard
>> address (::), but is useless for explicit address bindings.

Indeed.

> But that said, I'm not sure we're doing the wrong thing right now. Gerd,
> what do you think about this behavior?

Reinhard is correct.  If a hostname resolves to multiple addresses like 
this ...

    zweiblum kraxel ~# host zweiblum
    zweiblum.home.kraxel.org has address 192.168.2.101
    zweiblum.home.kraxel.org has IPv6 address 
2001:6f8:1cb3:2:216:41ff:fee1:3d40

... qemu should listen on all addresses returned.  Which in turn 
requires multiple listening sockets.

Changing this is a big deal though, that's why I've taken the somewhat 
unclean shortcut to listen on the first match only when implementing 
this.  Clients are supposed to try to connect to all addresses returned 
by the lookup (and they do if they have IPv6 support), thus this usually 
doesn't cause trouble in practice.
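
In code, a well-behaved client does roughly this (just a sketch, not 
taken from tigervnc or any other real client):

    /* Try every address the name resolves to and use the first one
     * that actually connects. */
    #include <netdb.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>

    int connect_any(const char *host, const char *port)
    {
        struct addrinfo hints, *res, *e;
        int fd = -1;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;        /* IPv4 and IPv6 candidates */
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, port, &hints, &res) != 0)
            return -1;

        for (e = res; e != NULL; e = e->ai_next) {
            fd = socket(e->ai_family, e->ai_socktype, e->ai_protocol);
            if (fd < 0)
                continue;
            if (connect(fd, e->ai_addr, e->ai_addrlen) == 0)
                break;                      /* first address that answers wins */
            close(fd);
            fd = -1;
        }
        freeaddrinfo(res);
        return fd;                          /* -1 if nothing was reachable */
    }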

When going for multiple listening sockets in qemu we have to figure out how 
we'll handle this in a number of places as there is no single listening 
address any more.  Reporting the vnc server address in QMP is one.  I'm 
sure there are more.

cheers,
   Gerd
Anthony Liguori May 4, 2010, 9:50 p.m. UTC | #5
On 05/04/2010 04:47 PM, Gerd Hoffmann wrote:
> On 05/04/10 18:23, Anthony Liguori wrote:
>> On 05/04/2010 08:49 AM, Reinhard Max wrote:
>>> Hi,
>>>
>>> I am maintaining the tightvnc package for openSUSE and was recently
>>> confronted with an alleged vnc problem with QEMU that turned out to be
>>> a shortcoming in QEMU's code for handling TCP server sockets, which is
>>> used by the vnc and char modules.
>>>
>>> The problem occurs when the address to listen on is given as a name
>>> which resolves to multiple IP addresses, the most prominent example
>>> being "localhost" resolving to 127.0.0.1 and ::1.
>
> My tigervnc (tightvnc successor) has IPv6 support and handles this 
> just fine.  There is also the option to force qemu to listen on ipv4 
> (or ipv6) only.
>
>>> The existing code stopped walking the list of addresses returned by
>>> getaddrinfo() as soon as one socket was successfully opened and bound.
>>> The result was that a qemu instance started with "-vnc localhost:42"
>>> only listened on ::1 and wasn't reachable through 127.0.0.1. The fact
>>> that the code set the IPV6_V6ONLY socket option didn't help, because
>>> that option only works when the socket gets bound to the IPv6 wildcard
>>> address (::), but is useless for explicit address bindings.
>
> Indeed.
>
>> But that said, I'm not sure we're doing the wrong thing right now. Gerd,
>> what do you think about this behavior?
>
> Reinhard is correct.  If a hostname resolves to multiple addresses 
> like this ...
>
>    zweiblum kraxel ~# host zweiblum
>    zweiblum.home.kraxel.org has address 192.168.2.101
>    zweiblum.home.kraxel.org has IPv6 address 
> 2001:6f8:1cb3:2:216:41ff:fee1:3d40
>
> ... qemu should listen on all addresses returned.  Which in turn 
> requires multiple listening sockets.
>
> Changing this is a big deal though, that's why I've taken the somewhat 
> unclean shortcut to listen on the first match only when implementing 
> this.  Clients are supposed to try to connect to all addresses 
> returned by the lookup (and they do if they have IPv6 support), thus 
> this usually doesn't cause trouble in practice.
>
> When going for multiple listening sockets in qemu we have to figure out 
> how we'll handle this in a number of places as there is no single 
> listening address any more.  Reporting the vnc server address in QMP 
> is one.  I'm sure there are more.

Okay, that makes sense.  Personally, I'm inclined to agree that this is 
a client problem.

Regards,

Anthony Liguori

> cheers,
>   Gerd
>
Reinhard Max May 4, 2010, 11:28 p.m. UTC | #6
On Tue, 4 May 2010 at 16:50, Anthony Liguori wrote:

> Personally, I'm inclined to agree that this is a client problem.

That would be a violation of the "be liberal in what you accept and 
conservative in what you produce" principle, and there are plenty of 
scenarios where even a client that Does It Right[tm] would fail, one 
example being that both ends are in networks that use public IPv6 
addresses, but have no IPv6 routing to each other (and yes, I've seen 
such networks).


cu
 	Reinhard
Reinhard Max May 4, 2010, 11:42 p.m. UTC | #7
On Tue, 4 May 2010 at 23:47, Gerd Hoffmann wrote:

> My tigervnc (tightvnc successor) has IPv6 support and handles this 
> just fine.

Well, as I wrote, the code is not just used for vnc, but also for 
server sockets that (if I got that right) could be connected to by 
telnet or netcat implementations that don't speak IPv6.

> When going for multiple listening sockets in qemu we have to figure out 
> how we'll handle this in a number of places as there is no single 
> listening address any more.

Well, that's what my patch is about. Did you take a look at it?

> Reporting the vnc server address in QMP is one.

Not sure what QMP is (this was the first time I looked at QEMU's 
internals), but I think my patch leaves only one place marked TODO, 
where I chose to report only the first address for now; it shouldn't 
be too hard to fix that as well.

BTW, in some places I circumvented the need for reporting multiple 
addresses by simply reporting the name that was passed to QEMU 
instead.


cu
 	Reinhard
Daniel P. Berrangé May 5, 2010, 8:29 a.m. UTC | #8
On Tue, May 04, 2010 at 03:49:50PM +0200, Reinhard Max wrote:
> Hi,
> 
> I am maintaining the tightvnc package for openSUSE and was recently 
> confronted with an alleged vnc problem with QEMU that turned out to be 
> a shortcoming in QEMU's code for handling TCP server sockets, which is 
> used by the vnc and char modules.
> 
> The problem occurs when the address to listen on is given as a name 
> which resolves to multiple IP addresses, the most prominent example 
> being "localhost" resolving to 127.0.0.1 and ::1.
> 
> The existing code stopped walking the list of addresses returned by 
> getaddrinfo() as soon as one socket was successfully opened and bound. 
> The result was that a qemu instance started with "-vnc localhost:42" 
> only listened on ::1 and wasn't reachable through 127.0.0.1. The fact 
> that the code set the IPV6_V6ONLY socket option didn't help, because 
> that option only works when the socket gets bound to the IPv6 wildcard 
> address (::), but is useless for explicit address bindings.

This seems to be something we overlooked in the initial impl. I don't
see any alternative but to make QEMU listen on multiple sockets in
the scenario you describe. An even clearer example is to consider binding
QEMU on a machine with multiple non-localhost addresses, e.g.

   myserver.com resolving to 192.168.122.41 + 2a00:123:456::1

because there's no way that the kernel can know/decide to automatically
listen on 192.168.122.41 when given the address 2a00:123:456::1, and if
the machine has many NICs, you can't assume the wildcard address is 
suitable either.

> The attached patch against QEMU 0.11.0 extends inet_listen() to create 
> sockets for as many addresses from the address list as possible and 
> adapts its callers and their data structures to deal with a linked 
> list of socket FDs rather than a single file descriptor.

The approach looks reasonable, though the patch is somewhat mangled
by the mix of tabs and spaces.

Regards,
Daniel
Gerd Hoffmann May 5, 2010, 8:53 a.m. UTC | #9
Hi,

>> When going for multiple listening sockets in qemu we have to figure out
>> how we'll handle this in a number of places as there is no single
>> listening address any more.
>
> Well, that's what my patch is about.

Sure.

> Did you take a look at it?

Briefly, yes.  Overall it looks sensible to me.  Devil is in the details 
though; see below.

Noticed that it probably should get a few helper functions to handle 
FdLists to avoid the quite similar open-coded loop-over-all-fds loops 
all over the place.

>> Reporting the vnc server address in QMP is one.
>
> Not sure what QMP is (this was the first time I looked at QEMU's
> internals),

You'll run into qmp for sure when forward-porting the patches to the 
latest qemu bits.  It is the machine-readable version of the monitor 
protocol (in qemu 0.12+).

> but I think my patch only leaves one place TODO where I
> chose to report only the first address for now, but it shouldn't be too
> hard to fix that as well.

Yea.  I've noticed that TODO ;)

> BTW, in some places I circumvented the need for reporting multiple
> addresses by simply reporting the name that was passed to QEMU instead.

This is one of the issues which needs to be addressed somehow.

First I think qemu should be self-consistent here, i.e. either report 
the (single) name or the list of addresses everywhere.

Second we have to care about the current users (especially libvirt). 
Today qemu usually reports the address I think.  Thus I tend to stick to 
addresses to keep them happy.

We'll have an externally visible change in any case though.  Either the 
switch from the address to the name or the switch from a single address 
to a list of addresses.  Both changes might break existing users.

cheers,
   Gerd
Reinhard Max May 5, 2010, 10:42 a.m. UTC | #10
Hi,

On Wed, 5 May 2010 at 10:53, Gerd Hoffmann wrote:

> Noticed that it probably should get a few helper functions to handle 
> FdLists to avoid the quite similar open-coded loop-over-all-fds 
> loops all over the place.

Indeed, thanks for the hint. I now have functions to create a new list 
element from an fd number and to destroy a list. Not sure if straight 
loops over existing lists can be further optimized, though.
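
Roughly along these lines (only a sketch; the exact names and signatures 
in the updated patch may differ, and I'm using qemu_mallocz()/qemu_free() 
here rather than plain malloc()/free()):

    /* FdList is the struct added to qemu_socket.h in the patch. */
    static FdList *fd_list_add(FdList **head, int fd, void *opaque)
    {
        FdList *fdl = qemu_mallocz(sizeof(*fdl));

        fdl->fd = fd;
        fdl->opaque = opaque;
        fdl->next = *head;      /* prepend for brevity; the patch itself
                                 * appends to keep resolver order */
        *head = fdl;
        return fdl;
    }

    static void fd_list_free(FdList **head)
    {
        FdList *fdl, *next;

        for (fdl = *head; fdl != NULL; fdl = next) {
            next = fdl->next;
            qemu_free(fdl);
        }
        *head = NULL;
    }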

> You'll run into qmp for sure when forward-porting the patches to the 
> latest qemu bits.  It is the machine-readable version of the monitor 
> protocol (in qemu 0.12+).

I guess that's the qemu_opt_set() calls at the end of 
inet_listen_opts()?

> First I think qemu should be self-consistent here, i.e. either 
> report the (single) name or the list of addresses everywhere.

Yes, this mixture wasn't meant to be final, but it helped me get 
the initial patch done with a minimal set of changes.

> Second we have to care about the current users (especially libvirt).

Wouldn't the users of that bit of information run it through 
getaddrinfo() anyways when trying to connect? So to them it shouldn't 
matter whether the name or an ASCII representation of the address is 
used.

> Today qemu usually reports the address I think.  Thus I tend to 
> stick to addresses to keep them happy.

But wouldn't going from a single address to multiple addresses be a 
bigger change for the users (and likely break them all), while going 
from address to name would only break those that were not using 
getaddrinfo() to translate the address into its binary representation?

OTOH, going for multiple addresses would also allow starting qemu with 
more than a single -vnc option, which doesn't seem to be possible 
right now, and which might come in handy in situations where the set of 
addresses a qemu instance should be listening on is not available as a 
single DNS name.


cu
 	Reinhard
Gerd Hoffmann May 5, 2010, 11:04 a.m. UTC | #11
Hi,

>> You'll run into qmp for sure when forward-porting the patches to the
>> latest qemu bits. It is the machine-readable version of the monitor
>> protocol (in qemu 0.12+).
>
> I guess that's the qemu_opt_set() calls at the end of inet_listen_opts()?

See docs in QMP/*, the changes in monitor.c and q${type}.[ch]

qemu_opt_set() in inet_listen_opts() is only slightly related.  It is 
used to report back the address we've actually bound to.  Used by 'info 
chardev' and I think vnc too.  Yes, that has to be changed somehow ...

>> Second we have to care about the current users (especially libvirt).
>
> Wouldn't the users of that bit of information run it through
> getaddrinfo() anyways when trying to connect? So to them it shouldn't
> matter whether the name or an ASCII representation of the address is used.

I don't know how it is used.

>> Today qemu usually reports the address I think. Thus I tend to stick
>> to addresses to keep them happy.
>
> But wouldn't going from a single address to multiple addresses be a bigger
> change for the users (and likely break them all) while going from
> address to name would only break those that were not using getaddrinfo()
> to translate the address into its binary representation?

It is probably best to bring this up on the libvirt list, as it is the
most important user of those interfaces and I think other virtual machine
management folks are reading there too.

I personally don't care too much which way we pick.

> OTOH, going for multiple addresses would also allow starting qemu with
> more than a single -vnc option, which doesn't seem to be possible right
> now, and which might come in handy in situations where the set of addresses a
> qemu instance should be listening on is not available as a single DNS name.

Good point.

cheers,
   Gerd
Daniel P. Berrangé May 5, 2010, 3:01 p.m. UTC | #12
On Tue, May 04, 2010 at 04:50:34PM -0500, Anthony Liguori wrote:
> On 05/04/2010 04:47 PM, Gerd Hoffmann wrote:
> >On 05/04/10 18:23, Anthony Liguori wrote:
> >>On 05/04/2010 08:49 AM, Reinhard Max wrote:
> >>>Hi,
> >>>
> >>>I am maintaining the tightvnc package for openSUSE and was recently
> >>>confronted with an alleged vnc problem with QEMU that turned out to be
> >>>a shortcoming in QEMU's code for handling TCP server sockets, which is
> >>>used by the vnc and char modules.
> >>>
> >>>The problem occurs when the address to listen on is given as a name
> >>>which resolves to multiple IP addresses, the most prominent example
> >>>being "localhost" resolving to 127.0.0.1 and ::1.
> >
> >My tigervnc (tightvnc successor) has IPv6 support and handles this 
> >just fine.  There is also the option to force qemu to listen on ipv4 
> >(or ipv6) only.
> >
> >>>The existing code stopped walking the list of addresses returned by
> >>>getaddrinfo() as soon as one socket was successfully opened and bound.
> >>>The result was that a qemu instance started with "-vnc localhost:42"
> >>>only listened on ::1 and wasn't reachable through 127.0.0.1. The fact
> >>>that the code set the IPV6_V6ONLY socket option didn't help, because
> >>>that option only works when the socket gets bound to the IPv6 wildcard
> >>>address (::), but is useless for explicit address bindings.
> >
> >Indeed.
> >
> >>But that said, I'm not sure we're doing the wrong thing right now. Gerd,
> >>what do you think about this behavior?
> >
> >Reinhard is correct.  If a hostname resolves to multiple addresses 
> >like this ...
> >
> >   zweiblum kraxel ~# host zweiblum
> >   zweiblum.home.kraxel.org has address 192.168.2.101
> >   zweiblum.home.kraxel.org has IPv6 address 
> >2001:6f8:1cb3:2:216:41ff:fee1:3d40
> >
> >... qemu should listen on all addresses returned.  Which in turn 
> >requires multiple listening sockets.
> >
> >Changing this is a big deal though, that's why I've taken the somewhat 
> >unclean shortcut to listen on the first match only when implementing 
> >this.  Clients are supposed to try to connect to all addresses 
> >returned by the lookup (and they do if they have IPv6 support), thus 
> >this usually doesn't cause trouble in practice.
> >
> >When going for multiple listening sockets in qemu we have to figure out 
> >how we'll handle this in a number of places as there is no single 
> >listening address any more.  Reporting the vnc server address in QMP 
> >is one.  I'm sure there are more.
> 
> Okay, that makes sense.  Personally, I'm inclined to agree that this is 
> a client problem.

It isn't a client problem if QEMU is not listening on all the required
addresses. In Gerd's case 

> >   zweiblum.home.kraxel.org has address 192.168.2.101
> >   zweiblum.home.kraxel.org has IPv6 address
> >2001:6f8:1cb3:2:216:41ff:fee1:3d40

If QEMU only listens on 2001:6f8:1cb3:2:216:41ff:fee1:3d40, and a VNC
client that only knows IPv4 tries to connect, it will fail. There's nothing
the client can do to solve this. QEMU has to be listening on all addresses 
associated with a name for the dual-stack setup to provide correct fallback 
for clients.


Daniel
Daniel P. Berrangé May 5, 2010, 5:14 p.m. UTC | #13
On Wed, May 05, 2010 at 10:53:00AM +0200, Gerd Hoffmann wrote:
>   Hi,
> 
> >>When going for multiple listening sockets in qemu we have to figure out
> >>how we'll handle this in a number of places as there is no single
> >>listening address any more.
> >
> >Well, that's what my patch is about.
> 
> Sure.
> 
> >Did you take a look at it?
> 
> Briefly, yes.  Overall it looks sensible to me.  Devil is in the details 
> though, see below.
> 
> Noticed that it probably should get a few helper functions to handle 
> FdLists to avoid the quite similar open-coded loop-over-all-fds loops 
> all over the place.
> 
> >>Reporting the vnc server address in QMP is one.
> >
> >Not sure what QMP is (this was the first time I looked at QEMU's
> >internals),
> 
> You'll run into qmp for sure when forward-porting the patches to the 
> latest qemu bits.  It is the machine-readable version of the monitor 
> protocol (in qemu 0.12+).
> 
> >but I think my patch only leaves one place TODO where I
> >chose to report only the first address for now, but it shouldn't be too
> >hard to fix that as well.
> 
> Yea.  I've noticed that TODO ;)
> 
> >BTW, in some places I circumvented the need for reporting multiple
> >addresses by simply reporting the name that was passed to QEMU instead.
> 
> This is one of the issues which needs to be addressed somehow.
> 
> First I think qemu should be self-consistent here, i.e. either report 
> the (single) name or the list of addresses everywhere.
> 
> Second we have to care about the current users (especially libvirt). 
> Today qemu usually reports the address I think.  Thus I tend to stick to 
> addresses to keep them happy.

From a libvirt POV the only place we use the address is in the graphics
notification event(s). For the event we know which specific address out 
of the list to report - the one we just did accept() on. libvirt does not 
use the 'info vnc' or 'query-vnc' output at all, so whether that reports 
IP address of hostname doesn't matter to us. It is probably easiest to 
just make it report the same info that was provided on the CLI to -vnc 


Regards,
Daniel

Patch

Index: qemu-char.c
===================================================================
--- qemu-char.c.orig
+++ qemu-char.c
@@ -1831,7 +1831,8 @@  return_err:
 /* TCP Net console */
 
 typedef struct {
-    int fd, listen_fd;
+    int fd;
+    FdList *listen_fds;
     int connected;
     int max_size;
     int do_telnetopt;
@@ -1983,6 +1984,7 @@  static void tcp_chr_read(void *opaque)
     TCPCharDriver *s = chr->opaque;
     uint8_t buf[1024];
     int len, size;
+    FdList *fdl;
 
     if (!s->connected || s->max_size <= 0)
         return;
@@ -1993,10 +1995,9 @@  static void tcp_chr_read(void *opaque)
     if (size == 0) {
         /* connection closed */
         s->connected = 0;
-        if (s->listen_fd >= 0) {
-            qemu_set_fd_handler(s->listen_fd, tcp_chr_accept, NULL, chr);
-        }
-        qemu_set_fd_handler(s->fd, NULL, NULL, NULL);
+	for (fdl = s->listen_fds; fdl != NULL; fdl = fdl->next)
+            qemu_set_fd_handler(fdl->fd, tcp_chr_accept, NULL, fdl);
+	qemu_set_fd_handler(s->fd, NULL, NULL, NULL);
         closesocket(s->fd);
         s->fd = -1;
     } else if (size > 0) {
@@ -2045,7 +2046,8 @@  static void socket_set_nodelay(int fd)
 
 static void tcp_chr_accept(void *opaque)
 {
-    CharDriverState *chr = opaque;
+    FdList *fdl = opaque;
+    CharDriverState *chr = fdl->opaque;
     TCPCharDriver *s = chr->opaque;
     struct sockaddr_in saddr;
 #ifndef _WIN32
@@ -2066,7 +2068,7 @@  static void tcp_chr_accept(void *opaque)
 	    len = sizeof(saddr);
 	    addr = (struct sockaddr *)&saddr;
 	}
-        fd = accept(s->listen_fd, addr, &len);
+        fd = accept(fdl->fd, addr, &len);
         if (fd < 0 && errno != EINTR) {
             return;
         } else if (fd >= 0) {
@@ -2079,20 +2081,24 @@  static void tcp_chr_accept(void *opaque)
     if (s->do_nodelay)
         socket_set_nodelay(fd);
     s->fd = fd;
-    qemu_set_fd_handler(s->listen_fd, NULL, NULL, NULL);
+    for (fdl = s->listen_fds; fdl != NULL; fdl = fdl->next)
+	qemu_set_fd_handler(fdl->fd, NULL, NULL, NULL);
     tcp_chr_connect(chr);
 }
 
 static void tcp_chr_close(CharDriverState *chr)
 {
     TCPCharDriver *s = chr->opaque;
+    FdList *fdl, *fdtmp;
     if (s->fd >= 0) {
         qemu_set_fd_handler(s->fd, NULL, NULL, NULL);
         closesocket(s->fd);
     }
-    if (s->listen_fd >= 0) {
-        qemu_set_fd_handler(s->listen_fd, NULL, NULL, NULL);
-        closesocket(s->listen_fd);
+    for (fdl = s->listen_fds; fdl != NULL; fdl = fdtmp) {
+	fdtmp = fdl->next;
+        qemu_set_fd_handler(fdl->fd, NULL, NULL, NULL);
+        closesocket(fdl->fd);
+	free(fdl);
     }
     qemu_free(s);
 }
@@ -2108,6 +2114,7 @@  static CharDriverState *qemu_chr_open_tc
     int is_waitconnect = 1;
     int do_nodelay = 0;
     const char *ptr;
+    FdList *sockets = NULL, *fdl, *fdtmp;
 
     ptr = host_str;
     while((ptr = strchr(ptr,','))) {
@@ -2155,11 +2162,18 @@  static CharDriverState *qemu_chr_open_tc
     } else {
         if (is_listen) {
             fd = inet_listen(host_str, chr->filename + offset, 256 - offset,
-                             SOCK_STREAM, 0);
+                             SOCK_STREAM, 0, &sockets);
         } else {
             fd = inet_connect(host_str, SOCK_STREAM);
         }
     }
+
+    if (sockets == NULL) {
+	sockets = malloc(sizeof(*sockets));
+	sockets->next = NULL;
+	sockets->fd = fd;
+    }
+
     if (fd < 0)
         goto fail;
 
@@ -2168,7 +2182,7 @@  static CharDriverState *qemu_chr_open_tc
 
     s->connected = 0;
     s->fd = -1;
-    s->listen_fd = -1;
+    s->listen_fds = NULL;
     s->msgfd = -1;
     s->is_unix = is_unix;
     s->do_nodelay = do_nodelay && !is_unix;
@@ -2179,8 +2193,11 @@  static CharDriverState *qemu_chr_open_tc
     chr->get_msgfd = tcp_get_msgfd;
 
     if (is_listen) {
-        s->listen_fd = fd;
-        qemu_set_fd_handler(s->listen_fd, tcp_chr_accept, NULL, chr);
+        s->listen_fds = sockets;
+	for (fdl = sockets; fdl != NULL; fdl = fdl->next) {
+	    fdl->opaque = chr;
+	    qemu_set_fd_handler(fdl->fd, tcp_chr_accept, NULL, fdl);
+	}
         if (is_telnet)
             s->do_telnetopt = 1;
     } else {
@@ -2191,16 +2208,21 @@  static CharDriverState *qemu_chr_open_tc
     }
 
     if (is_listen && is_waitconnect) {
+	FdList *fdl;
         printf("QEMU waiting for connection on: %s\n",
                chr->filename ? chr->filename : host_str);
         tcp_chr_accept(chr);
-        socket_set_nonblock(s->listen_fd);
+	for (fdl = s->listen_fds; fdl != NULL; fdl = fdl->next)
+	    socket_set_nonblock(fdl->fd);
     }
 
     return chr;
  fail:
-    if (fd >= 0)
-        closesocket(fd);
+    for (fdl = sockets; fdl != NULL; fdl = fdtmp) {
+	fdtmp = fdl->next;
+	closesocket(fdl->fd);
+	free(fdl);
+    }
     qemu_free(s);
     qemu_free(chr);
     return NULL;
Index: qemu-sockets.c
===================================================================
--- qemu-sockets.c.orig
+++ qemu-sockets.c
@@ -89,7 +89,7 @@  static void inet_print_addrinfo(const ch
 }
 
 int inet_listen(const char *str, char *ostr, int olen,
-                int socktype, int port_offset)
+                int socktype, int port_offset, FdList **sockets)
 {
     struct addrinfo ai,*res,*e;
     char addr[64];
@@ -97,8 +97,11 @@  int inet_listen(const char *str, char *o
     char uaddr[INET6_ADDRSTRLEN+1];
     char uport[33];
     const char *opts, *h;
-    int slisten,rc,pos,to,try_next;
+    int slisten,rc,pos,to,try_next,portno;
+    int retval = 0;
+    FdList *fdcur = NULL, *fdnew;
 
+    *sockets = NULL;
     memset(&ai,0, sizeof(ai));
     ai.ai_flags = AI_PASSIVE | AI_ADDRCONFIG;
     ai.ai_family = PF_UNSPEC;
@@ -174,9 +177,9 @@  int inet_listen(const char *str, char *o
         setsockopt(slisten,SOL_SOCKET,SO_REUSEADDR,(void*)&on,sizeof(on));
 #ifdef IPV6_V6ONLY
         if (e->ai_family == PF_INET6) {
-            /* listen on both ipv4 and ipv6 */
-            setsockopt(slisten,IPPROTO_IPV6,IPV6_V6ONLY,(void*)&off,
-                sizeof(off));
+            /* listen on IPv6 only, IPv4 is handled by a separate socket */
+            setsockopt(slisten,IPPROTO_IPV6,IPV6_V6ONLY,(void*)&on,
+                sizeof(on));
         }
 #endif
 
@@ -184,8 +187,22 @@  int inet_listen(const char *str, char *o
             if (bind(slisten, e->ai_addr, e->ai_addrlen) == 0) {
                 if (sockets_debug)
                     fprintf(stderr,"%s: bind(%s,%s,%d): OK\n", __FUNCTION__,
-                        inet_strfamily(e->ai_family), uaddr, inet_getport(e));
-                goto listen;
+			    inet_strfamily(e->ai_family), uaddr, inet_getport(e));
+		if (listen(slisten,1) != 0) {
+		    closesocket(slisten);
+		} else {
+		    fdnew = malloc(sizeof(*fdnew));
+		    fdnew->fd = slisten;
+		    fdnew->next = NULL;
+		    if (fdcur == NULL) {
+			*sockets = fdnew;
+		    } else {
+			fdcur->next = fdnew;
+		    }
+		    fdcur = fdnew;
+		    portno = inet_getport(e);
+		}
+		break;
             }
             try_next = to && (inet_getport(e) <= to + port_offset);
             if (!try_next || sockets_debug)
@@ -196,32 +213,18 @@  int inet_listen(const char *str, char *o
                 inet_setport(e, inet_getport(e) + 1);
                 continue;
             }
+	    closesocket(slisten);
             break;
         }
-        closesocket(slisten);
     }
-    fprintf(stderr, "%s: FAILED\n", __FUNCTION__);
-    freeaddrinfo(res);
-    return -1;
-
-listen:
-    if (listen(slisten,1) != 0) {
-        perror("listen");
-        closesocket(slisten);
-        freeaddrinfo(res);
-        return -1;
-    }
-    if (ostr) {
-        if (e->ai_family == PF_INET6) {
-            snprintf(ostr, olen, "[%s]:%d%s", uaddr,
-                     inet_getport(e) - port_offset, opts);
-        } else {
-            snprintf(ostr, olen, "%s:%d%s", uaddr,
-                     inet_getport(e) - port_offset, opts);
-        }
+    if (fdcur == NULL) {
+	fprintf(stderr, "%s: FAILED\n", __FUNCTION__);
+	retval = -1;
+    } else if (ostr) {
+	snprintf(ostr, olen, "%s:%d%s", addr, portno - port_offset, opts);
     }
     freeaddrinfo(res);
-    return slisten;
+    return retval;
 }
 
 int inet_connect(const char *str, int socktype)
Index: qemu_socket.h
===================================================================
--- qemu_socket.h.orig
+++ qemu_socket.h
@@ -29,13 +29,20 @@  int inet_aton(const char *cp, struct in_
 
 #endif /* !_WIN32 */
 
+struct FdList;
+typedef struct FdList {
+    int fd;
+    void *opaque;
+    struct FdList *next;
+} FdList;
+
 /* misc helpers */
 void socket_set_nonblock(int fd);
 int send_all(int fd, const void *buf, int len1);
 
 /* New, ipv6-ready socket helper functions, see qemu-sockets.c */
 int inet_listen(const char *str, char *ostr, int olen,
-                int socktype, int port_offset);
+                int socktype, int port_offset, FdList **sockets);
 int inet_connect(const char *str, int socktype);
 
 int unix_listen(const char *path, char *ostr, int olen);
Index: vnc.c
===================================================================
--- vnc.c.orig
+++ vnc.c
@@ -26,7 +26,6 @@ 
 
 #include "vnc.h"
 #include "sysemu.h"
-#include "qemu_socket.h"
 #include "qemu-timer.h"
 #include "acl.h"
 
@@ -181,15 +180,18 @@  void do_info_vnc(Monitor *mon)
     if (vnc_display == NULL || vnc_display->display == NULL) {
         monitor_printf(mon, "Server: disabled\n");
     } else {
-        char *serverAddr = vnc_socket_local_addr("     address: %s:%s\n",
-                                                 vnc_display->lsock);
-
-        if (!serverAddr)
-            return;
-
+	FdList *fdl;
+        char *serverAddr;
+	
         monitor_printf(mon, "Server:\n");
-        monitor_printf(mon, "%s", serverAddr);
-        free(serverAddr);
+	for (fdl  = vnc_display->lsocks; fdl != NULL; fdl = fdl->next) {
+	    serverAddr = vnc_socket_local_addr("     address: %s:%s\n",
+					       fdl->fd);
+	    if (!serverAddr)
+		continue;
+	    monitor_printf(mon, "%s", serverAddr);
+	    free(serverAddr);
+	}
         monitor_printf(mon, "        auth: %s\n", vnc_auth_name(vnc_display));
 
         if (vnc_display->clients) {
@@ -2113,14 +2115,15 @@  static void vnc_connect(VncDisplay *vd,
 
 static void vnc_listen_read(void *opaque)
 {
-    VncDisplay *vs = opaque;
+    FdList *fds = opaque;
+    VncDisplay *vs = fds->opaque;
     struct sockaddr_in addr;
     socklen_t addrlen = sizeof(addr);
 
     /* Catch-up */
     vga_hw_update();
 
-    int csock = accept(vs->lsock, (struct sockaddr *)&addr, &addrlen);
+    int csock = accept(fds->fd, (struct sockaddr *)&addr, &addrlen);
     if (csock != -1) {
         vnc_connect(vs, csock);
     }
@@ -2136,7 +2139,7 @@  void vnc_display_init(DisplayState *ds)
     dcl->idle = 1;
     vnc_display = vs;
 
-    vs->lsock = -1;
+    vs->lsocks = NULL;
 
     vs->ds = ds;
 
@@ -2159,6 +2162,7 @@  void vnc_display_init(DisplayState *ds)
 void vnc_display_close(DisplayState *ds)
 {
     VncDisplay *vs = ds ? (VncDisplay *)ds->opaque : vnc_display;
+    FdList *fds, *fdtmp;
 
     if (!vs)
         return;
@@ -2166,11 +2170,13 @@  void vnc_display_close(DisplayState *ds)
         qemu_free(vs->display);
         vs->display = NULL;
     }
-    if (vs->lsock != -1) {
-        qemu_set_fd_handler2(vs->lsock, NULL, NULL, NULL, NULL);
-        close(vs->lsock);
-        vs->lsock = -1;
+    for (fds = vs->lsocks; fds != NULL; fds = fdtmp) {
+	fdtmp = fds->next;
+	qemu_set_fd_handler2(fds->fd, NULL, NULL, NULL, NULL);
+	close(fds->fd);
+	free(fds);
     }
+    vs->lsocks = NULL;
     vs->auth = VNC_AUTH_INVALID;
 #ifdef CONFIG_VNC_TLS
     vs->subauth = VNC_AUTH_INVALID;
@@ -2207,7 +2213,8 @@  char *vnc_display_local_addr(DisplayStat
 {
     VncDisplay *vs = ds ? (VncDisplay *)ds->opaque : vnc_display;
     
-    return vnc_socket_local_addr("%s:%s", vs->lsock);
+    /* FIXME: should return all addresses */
+    return vnc_socket_local_addr("%s:%s", vs->lsocks->fd);
 }
 
 int vnc_display_open(DisplayState *ds, const char *display)
@@ -2225,7 +2232,8 @@  int vnc_display_open(DisplayState *ds, c
     int saslErr;
 #endif
     int acl = 0;
-
+    FdList *socks = NULL, *fds;
+    
     if (!vnc_display)
         return -1;
     vnc_display_close(ds);
@@ -2391,39 +2399,52 @@  int vnc_display_open(DisplayState *ds, c
 #endif
 
     if (reverse) {
+	int sock;
         /* connect to viewer */
         if (strncmp(display, "unix:", 5) == 0)
-            vs->lsock = unix_connect(display+5);
+            sock = unix_connect(display+5);
         else
-            vs->lsock = inet_connect(display, SOCK_STREAM);
-        if (-1 == vs->lsock) {
+            sock = inet_connect(display, SOCK_STREAM);
+        if (-1 == sock) {
             free(vs->display);
             vs->display = NULL;
             return -1;
         } else {
-            int csock = vs->lsock;
-            vs->lsock = -1;
-            vnc_connect(vs, csock);
+            vnc_connect(vs, sock);
         }
         return 0;
 
     } else {
         /* listen for connects */
         char *dpy;
+	int fd;
+
         dpy = qemu_malloc(256);
         if (strncmp(display, "unix:", 5) == 0) {
             pstrcpy(dpy, 256, "unix:");
-            vs->lsock = unix_listen(display+5, dpy+5, 256-5);
+            fd = unix_listen(display+5, dpy+5, 256-5);
         } else {
-            vs->lsock = inet_listen(display, dpy, 256, SOCK_STREAM, 5900);
+            fd = inet_listen(display, dpy, 256, SOCK_STREAM, 5900,
+			     &socks);
         }
-        if (-1 == vs->lsock) {
+	
+        if (-1 == fd) {
             free(dpy);
             return -1;
         } else {
+	    if (socks == NULL) {
+		socks = malloc(sizeof(*socks));
+		socks->fd = fd;
+		socks->next = NULL;
+	    }
+	    vs->lsocks = socks;
             free(vs->display);
             vs->display = dpy;
         }
     }
-    return qemu_set_fd_handler2(vs->lsock, NULL, vnc_listen_read, NULL, vs);
+    for (fds = socks; fds != NULL; fds = fds->next) {
+	fds->opaque = vs;
+	qemu_set_fd_handler2(fds->fd, NULL, vnc_listen_read, NULL, fds);
+    }
+    return 0;
 }
Index: vnc.h
===================================================================
--- vnc.h.orig
+++ vnc.h
@@ -28,6 +28,7 @@ 
 #define __QEMU_VNC_H
 
 #include "qemu-common.h"
+#include "qemu_socket.h"
 #include "console.h"
 #include "monitor.h"
 #include "audio/audio.h"
@@ -87,7 +88,7 @@  typedef struct VncDisplay VncDisplay;
 
 struct VncDisplay
 {
-    int lsock;
+    FdList *lsocks;
     DisplayState *ds;
     VncState *clients;
     kbd_layout_t *kbd_layout;