Patchwork [RFC,v3,11/21] virtproxy: add handler for control packet

Submitter Michael Roth
Date Nov. 16, 2010, 1:16 a.m.
Message ID <1289870175-14880-12-git-send-email-mdroth@linux.vnet.ibm.com>
Permalink /patch/71338/
State New

Comments

Michael Roth - Nov. 16, 2010, 1:16 a.m.
Process control packets coming in over the channel. This entails setting
up/tearing down connections to local services initiated from the other
end of the channel.

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
 virtproxy.c |  154 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 154 insertions(+), 0 deletions(-)
Jes Sorensen - Nov. 18, 2010, 11:35 a.m.
On 11/16/10 02:16, Michael Roth wrote:
> Process control packets coming in over the channel. This entails setting
> up/tearing down connections to local services initiated from the other
> end of the channel.
> 
> Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
> ---
>  virtproxy.c |  154 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 files changed, 154 insertions(+), 0 deletions(-)

[snip]

> +
> +        qemu_opts_print(iforward->socket_opts, NULL);
> +        if (qemu_opt_get(iforward->socket_opts, "host") != NULL) {
> +            server_fd = inet_connect_opts(iforward->socket_opts);
> +        } else if (qemu_opt_get(iforward->socket_opts, "path") != NULL) {
> +            server_fd = unix_connect_opts(iforward->socket_opts);
> +        } else {
> +            LOG("unable to find listening socket host/addr info");
> +            return -1;
> +        }

This patch is a perfect example of why -1 as an error message is suboptimal.

> +        closesocket(fd);
> +        vp_set_fd_handler(fd, NULL, NULL, conn);
> +        QLIST_REMOVE(conn, next);
> +        qemu_free(conn);
> +        break;
> +    }
> +    }

You should never have two closing braces in the same column like this -
something is wrong with the formatting.

Cheers,
Jes
Michael Roth - Nov. 18, 2010, 4:18 p.m.
On 11/18/2010 05:35 AM, Jes Sorensen wrote:
> On 11/16/10 02:16, Michael Roth wrote:
>> Process control packets coming in over the channel. This entails setting
>> up/tearing down connections to local services initiated from the other
>> end of the channel.
>>
>> Signed-off-by: Michael Roth<mdroth@linux.vnet.ibm.com>
>> ---
>>   virtproxy.c |  154 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>   1 files changed, 154 insertions(+), 0 deletions(-)
>
> [snip]
>
>> +
>> +        qemu_opts_print(iforward->socket_opts, NULL);
>> +        if (qemu_opt_get(iforward->socket_opts, "host") != NULL) {
>> +            server_fd = inet_connect_opts(iforward->socket_opts);
>> +        } else if (qemu_opt_get(iforward->socket_opts, "path") != NULL) {
>> +            server_fd = unix_connect_opts(iforward->socket_opts);
>> +        } else {
>> +            LOG("unable to find listening socket host/addr info");
>> +            return -1;
>> +        }
>
> This patch is a perfect example of why -1 as an error message is suboptimal.
>
>> +        closesocket(fd);
>> +        vp_set_fd_handler(fd, NULL, NULL, conn);
>> +        QLIST_REMOVE(conn, next);
>> +        qemu_free(conn);
>> +        break;
>> +    }
>> +    }
>
> You should never have two closing braces in the same column like this -
> something is wrong with the formatting.

That's from using a block for the case
switch () {
case: {
     ...
}
}

Alternative is to indent the "case:", which is "right" I think, but 
aligning those with switch() seems to be pretty standard to conserve space.

>
> Cheers,
> Jes
Jes Sorensen - Nov. 18, 2010, 4:22 p.m.
On 11/18/10 17:18, Michael Roth wrote:
> On 11/18/2010 05:35 AM, Jes Sorensen wrote:
>> You should never have two closing braces in the same column like this -
>> something is wrong with the formatting.
> 
> That's from using a block for the case
> switch () {
> case: {
>     ...
> }
> }
> 
> Alternative is to indent the "case:", which is "right" I think, but
> aligning those with switch() seems to be pretty standard to conserve space.

Why do you need braces around the case: { } ? That is not normally used
throughout the code.

Cheers,
Jes
Michael Roth - Nov. 18, 2010, 4:50 p.m.
On 11/18/2010 10:22 AM, Jes Sorensen wrote:
> On 11/18/10 17:18, Michael Roth wrote:
>> On 11/18/2010 05:35 AM, Jes Sorensen wrote:
>>> You should never have two closing braces in the same column like this -
>>> something is wrong with the formatting.
>>
>> That's from using a block for the case
>> switch () {
>> case: {
>>      ...
>> }
>> }
>>
>> Alternative is to indent the "case:", which is "right" I think, but
>> aligning those with switch() seems to be pretty standard to conserve space.
>
> Why do you need braces around the case: { } ? That is not normally used
> throughout the code.
>

I use them here to avoid declaring a bunch of variables that may not be 
used depending on the case. Although, it might be cleaner to use separate 
functions here instead.

> Cheers,
> Jes

Patch

diff --git a/virtproxy.c b/virtproxy.c
index a0bbe7f..0cc8950 100644
--- a/virtproxy.c
+++ b/virtproxy.c
@@ -268,6 +268,160 @@  static void vp_channel_accept(void *opaque)
     vp_set_fd_handler(drv->listen_fd, NULL, NULL, NULL);
 }
 
+/* handle control packets
+ *
+ * process VPPackets containing control messages
+ */
+static int vp_handle_control_packet(VPDriver *drv, const VPPacket *pkt)
+{
+    const VPControlMsg *msg = &pkt->payload.msg;
+    int ret;
+
+    TRACE("called with drv: %p", drv);
+
+    switch (msg->type) {
+    case VP_CONTROL_CONNECT_INIT: {
+        int client_fd = msg->args.connect_init.client_fd;
+        int server_fd;
+        char service_id[VP_SERVICE_ID_LEN];
+        VPPacket resp_pkt;
+        VPConn *new_conn;
+        VPIForward *iforward;
+
+        pstrcpy(service_id, VP_SERVICE_ID_LEN,
+                 msg->args.connect_init.service_id);
+        TRACE("setting up connection for service id %s", service_id);
+
+        /* create server connection on behalf of remote end */
+        iforward = get_iforward(drv, service_id);
+        if (iforward == NULL) {
+            LOG("no forwarder configured for service id");
+            return -1;
+        }
+
+        qemu_opts_print(iforward->socket_opts, NULL);
+        if (qemu_opt_get(iforward->socket_opts, "host") != NULL) {
+            server_fd = inet_connect_opts(iforward->socket_opts);
+        } else if (qemu_opt_get(iforward->socket_opts, "path") != NULL) {
+            server_fd = unix_connect_opts(iforward->socket_opts);
+        } else {
+            LOG("unable to find listening socket host/addr info");
+            return -1;
+        }
+
+        if (server_fd == -1) {
+            LOG("failed to create connection to service with id %s",
+                service_id);
+        }
+        TRACE("server_fd: %d", server_fd);
+
+        new_conn = qemu_mallocz(sizeof(VPConn));
+        if (!new_conn) {
+            LOG("memory allocation failed");
+            return -1;
+        }
+
+        /* send a connect_ack back over the channel */
+        /* TODO: all fields should be explicitly set so we shouldn't
+         * need to memset. this might hurt if we beef up VPPacket size
+         */
+        memset(&resp_pkt, 0, sizeof(resp_pkt));
+        resp_pkt.type = VP_PKT_CONTROL;
+        resp_pkt.payload.msg.type = VP_CONTROL_CONNECT_ACK;
+        resp_pkt.payload.msg.args.connect_ack.server_fd = server_fd;
+        resp_pkt.payload.msg.args.connect_ack.client_fd = client_fd;
+        resp_pkt.magic = VP_MAGIC;
+
+        /* TODO: can this potentially block or cause a deadlock with
+         * the remote end? need to look into potentially buffering these
+         * if it looks like the remote end is waiting for us to read data
+         * off the channel.
+         */
+        if (!drv->chr && drv->channel_fd == -1) {
+            TRACE("channel no longer connected, ignoring packet");
+            return -1;
+        }
+
+        ret = vp_channel_send_all(drv, (void *)&resp_pkt, sizeof(resp_pkt));
+        if (ret == -1) {
+            LOG("error sending data over channel");
+            return -1;
+        }
+        if (ret != sizeof(resp_pkt)) {
+            TRACE("buffer full? %d bytes remaining", ret);
+            return -1;
+        }
+
+        /* add new VPConn to list and set a read handler for it */
+        new_conn->drv = drv;
+        new_conn->client_fd = client_fd;
+        new_conn->server_fd = server_fd;
+        new_conn->type = VP_CONN_SERVER;
+        new_conn->state = VP_STATE_CONNECTED;
+        QLIST_INSERT_HEAD(&drv->conns, new_conn, next);
+        vp_set_fd_handler(server_fd, vp_conn_read, NULL, new_conn);
+
+        break;
+    }
+    case VP_CONTROL_CONNECT_ACK: {
+        int client_fd = msg->args.connect_ack.client_fd;
+        int server_fd = msg->args.connect_ack.server_fd;
+        VPConn *conn;
+
+        TRACE("received ack from remote end for client fd %d", client_fd);
+
+        if (server_fd <= 0) {
+            LOG("remote end sent invalid server fd");
+            return -1;
+        }
+
+        conn = get_conn(drv, client_fd, true);
+
+        if (conn == NULL) {
+            LOG("failed to find connection with client_fd %d", client_fd);
+            return -1;
+        }
+
+        conn->server_fd = server_fd;
+        conn->state = VP_STATE_CONNECTED;
+        vp_set_fd_handler(client_fd, vp_conn_read, NULL, conn);
+
+        break;
+    }
+    case VP_CONTROL_CLOSE: {
+        int fd;
+        VPConn *conn;
+
+        TRACE("closing connection on behalf of remote end");
+
+        if (msg->args.close.client_fd >= 0) {
+            fd = msg->args.close.client_fd;
+            TRACE("received close msg from remote end for client fd %d", fd);
+            conn = get_conn(drv, fd, true);
+        } else if (msg->args.close.server_fd >= 0) {
+            fd = msg->args.close.server_fd;
+            TRACE("received close msg from remote end for server fd %d", fd);
+            conn = get_conn(drv, fd, false);
+        } else {
+            LOG("invalid fd");
+            return -1;
+        }
+
+        if (conn == NULL) {
+            LOG("failed to find conn with specified fd %d", fd);
+            return -1;
+        }
+
+        closesocket(fd);
+        vp_set_fd_handler(fd, NULL, NULL, conn);
+        QLIST_REMOVE(conn, next);
+        qemu_free(conn);
+        break;
+    }
+    }
+    return 0;
+}
+
 /* handle data packets
  *
  * process VPPackets containing data and send them to the corresponding