From patchwork Mon Mar 21 08:34:36 2011
X-Patchwork-Submitter: Corentin Chary
X-Patchwork-Id: 87717
From: Corentin Chary <corentin.chary@gmail.com>
To: anthony@codemonkey.ws
Cc: Blue Swirl, qemu-devel, Corentin Chary, Paolo Bonzini
Date: Mon, 21 Mar 2011 09:34:36 +0100
Message-Id: <1300696478-6051-3-git-send-email-corentin.chary@gmail.com>
In-Reply-To: <1300696478-6051-1-git-send-email-corentin.chary@gmail.com>
Subject: [Qemu-devel] [PATCH 2/4] vnc: don't mess up with iohandlers in the vnc thread

The threaded VNC server messed with QEMU fd handlers without any kind
of locking, and that can cause some nasty race conditions.

Using qemu_mutex_lock_iothread() won't work because vnc_dpy_copy(),
which will wait for the current job queue to finish, can be called with
the iothread lock held.

Instead, we now store the data in a temporary buffer, and use a bottom
half to notify the main thread that new data is available.

vnc_[un]lock_output() is still needed to access VncState members like
abort, csock or jobs_buffer.
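To see why taking the iothread lock in the worker would deadlock, here
is a minimal standalone sketch (plain pthreads, hypothetical names; it
is not QEMU code, and it intentionally hangs when run): the main thread
holds the big lock while it waits for the worker, and the worker blocks
forever trying to take that same lock.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t iothread_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t queue_done = PTHREAD_COND_INITIALIZER;
static bool job_finished = false;

static void *worker(void *arg)
{
    (void)arg;
    /* Models the worker taking the iothread lock to update the fd
     * handlers: blocks forever, since the main thread already holds
     * it and won't release it until the job is done. */
    pthread_mutex_lock(&iothread_lock);
    pthread_mutex_unlock(&iothread_lock);

    pthread_mutex_lock(&queue_lock);
    job_finished = true;
    pthread_cond_signal(&queue_done);
    pthread_mutex_unlock(&queue_lock);
    return NULL;
}

int main(void)
{
    pthread_t tid;

    /* Models vnc_dpy_copy() being entered with the iothread lock held. */
    pthread_mutex_lock(&iothread_lock);
    pthread_create(&tid, NULL, worker, NULL);

    /* Models vnc_jobs_join(): wait for the worker's job to finish. */
    pthread_mutex_lock(&queue_lock);
    while (!job_finished) {
        pthread_cond_wait(&queue_done, &queue_lock); /* never signalled */
    }
    pthread_mutex_unlock(&queue_lock);
    pthread_mutex_unlock(&iothread_lock);

    pthread_join(tid, NULL);
    printf("not reached\n");
    return 0;
}

The bottom half sidesteps this: the worker only appends to jobs_buffer
under the output lock and schedules the bottom half; the socket and the
fd handlers are only ever touched from the main thread.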
Signed-off-by: Corentin Chary <corentin.chary@gmail.com>
---
 ui/vnc-jobs-async.c |   48 +++++++++++++++++++++++++++++-------------------
 ui/vnc-jobs.h       |    1 +
 ui/vnc.c            |   12 ++++++++++++
 ui/vnc.h            |    2 ++
 4 files changed, 44 insertions(+), 19 deletions(-)

diff --git a/ui/vnc-jobs-async.c b/ui/vnc-jobs-async.c
index 1dfa6c3..4daf4b4 100644
--- a/ui/vnc-jobs-async.c
+++ b/ui/vnc-jobs-async.c
@@ -28,6 +28,7 @@
 
 #include "vnc.h"
 #include "vnc-jobs.h"
+#include "qemu_socket.h"
 
 /*
  * Locking:
@@ -155,6 +156,24 @@ void vnc_jobs_join(VncState *vs)
         qemu_cond_wait(&queue->cond, &queue->mutex);
     }
     vnc_unlock_queue(queue);
+    vnc_jobs_consume_buffer(vs);
+}
+
+void vnc_jobs_consume_buffer(VncState *vs)
+{
+    bool flush;
+
+    vnc_lock_output(vs);
+    if (vs->jobs_buffer.offset) {
+        vnc_write(vs, vs->jobs_buffer.buffer, vs->jobs_buffer.offset);
+        buffer_reset(&vs->jobs_buffer);
+    }
+    flush = vs->csock != -1 && vs->abort != true;
+    vnc_unlock_output(vs);
+
+    if (flush) {
+        vnc_flush(vs);
+    }
 }
 
 /*
@@ -197,7 +216,6 @@ static int vnc_worker_thread_loop(VncJobQueue *queue)
     VncState vs;
     int n_rectangles;
     int saved_offset;
-    bool flush;
 
     vnc_lock_queue(queue);
     while (QTAILQ_EMPTY(&queue->jobs) && !queue->exit) {
@@ -213,6 +231,7 @@ static int vnc_worker_thread_loop(VncJobQueue *queue)
 
     vnc_lock_output(job->vs);
     if (job->vs->csock == -1 || job->vs->abort == true) {
+        vnc_unlock_output(job->vs);
         goto disconnected;
     }
     vnc_unlock_output(job->vs);
@@ -233,10 +252,6 @@ static int vnc_worker_thread_loop(VncJobQueue *queue)
 
         if (job->vs->csock == -1) {
             vnc_unlock_display(job->vs->vd);
-            /* output mutex must be locked before going to
-             * disconnected:
-             */
-            vnc_lock_output(job->vs);
             goto disconnected;
         }
 
@@ -254,24 +269,19 @@ static int vnc_worker_thread_loop(VncJobQueue *queue)
     vs.output.buffer[saved_offset] = (n_rectangles >> 8) & 0xFF;
     vs.output.buffer[saved_offset + 1] = n_rectangles & 0xFF;
 
-    /* Switch back buffers */
     vnc_lock_output(job->vs);
-    if (job->vs->csock == -1) {
-        goto disconnected;
+    if (job->vs->csock != -1) {
+        buffer_reserve(&job->vs->jobs_buffer, vs.output.offset);
+        buffer_append(&job->vs->jobs_buffer, vs.output.buffer,
+                      vs.output.offset);
+        /* Copy persistent encoding data */
+        vnc_async_encoding_end(job->vs, &vs);
+
+        qemu_bh_schedule(job->vs->bh);
     }
-
-    vnc_write(job->vs, vs.output.buffer, vs.output.offset);
-
-disconnected:
-    /* Copy persistent encoding data */
-    vnc_async_encoding_end(job->vs, &vs);
-    flush = (job->vs->csock != -1 && job->vs->abort != true);
     vnc_unlock_output(job->vs);
 
-    if (flush) {
-        vnc_flush(job->vs);
-    }
-
+disconnected:
     vnc_lock_queue(queue);
     QTAILQ_REMOVE(&queue->jobs, job, next);
     vnc_unlock_queue(queue);
diff --git a/ui/vnc-jobs.h b/ui/vnc-jobs.h
index b8dab81..4c661f9 100644
--- a/ui/vnc-jobs.h
+++ b/ui/vnc-jobs.h
@@ -40,6 +40,7 @@ void vnc_jobs_join(VncState *vs);
 
 #ifdef CONFIG_VNC_THREAD
 
+void vnc_jobs_consume_buffer(VncState *vs);
 void vnc_start_worker_thread(void);
 bool vnc_worker_thread_running(void);
 void vnc_stop_worker_thread(void);
diff --git a/ui/vnc.c b/ui/vnc.c
index 1b68965..5d4beaa 100644
--- a/ui/vnc.c
+++ b/ui/vnc.c
@@ -1011,7 +1011,10 @@ static void vnc_disconnect_finish(VncState *vs)
 
 #ifdef CONFIG_VNC_THREAD
     qemu_mutex_destroy(&vs->output_mutex);
+    qemu_bh_delete(vs->bh);
+    buffer_free(&vs->jobs_buffer);
 #endif
+
     for (i = 0; i < VNC_STAT_ROWS; ++i) {
         qemu_free(vs->lossy_rect[i]);
     }
@@ -1226,6 +1229,14 @@ static long vnc_client_read_plain(VncState *vs)
     return ret;
 }
 
+#ifdef CONFIG_VNC_THREAD
+static void vnc_jobs_bh(void *opaque)
+{
+    VncState *vs = opaque;
+
+    vnc_jobs_consume_buffer(vs);
+}
+#endif
 
 /*
  * First function called whenever there is more data to be read from
@@ -2521,6 +2532,7 @@ static void vnc_connect(VncDisplay *vd, int csock)
 
 #ifdef CONFIG_VNC_THREAD
     qemu_mutex_init(&vs->output_mutex);
+    vs->bh = qemu_bh_new(vnc_jobs_bh, vs);
 #endif
 
     QTAILQ_INSERT_HEAD(&vd->clients, vs, next);
diff --git a/ui/vnc.h b/ui/vnc.h
index f10c5dc..5599a4d 100644
--- a/ui/vnc.h
+++ b/ui/vnc.h
@@ -286,6 +286,8 @@ struct VncState
     VncJob job;
 #else
     QemuMutex output_mutex;
+    QEMUBH *bh;
+    Buffer jobs_buffer;
 #endif
 
     /* Encoding specific, if you add something here, don't forget to
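For reviewers, the lifecycle of the new bottom half, condensed from the
hunks above (the comments are mine, the names are the ones used in the
diff; this is a summary, not additional code to apply):

/* vnc_connect() (main thread): one bottom half per client */
vs->bh = qemu_bh_new(vnc_jobs_bh, vs);

/* vnc_worker_thread_loop() (worker thread): publish the encoded
 * rectangles under the output lock, then poke the main thread */
vnc_lock_output(job->vs);
buffer_reserve(&job->vs->jobs_buffer, vs.output.offset);
buffer_append(&job->vs->jobs_buffer, vs.output.buffer, vs.output.offset);
qemu_bh_schedule(job->vs->bh);
vnc_unlock_output(job->vs);

/* vnc_jobs_bh() (main thread): drains jobs_buffer into the client
 * socket via vnc_jobs_consume_buffer(), with the iothread lock held */

/* vnc_disconnect_finish() (main thread): tear down */
qemu_bh_delete(vs->bh);
buffer_free(&vs->jobs_buffer);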