From patchwork Thu Jun 6 06:25:49 2013
X-Patchwork-Submitter: Fam Zheng
X-Patchwork-Id: 249282
From: Fam Zheng
To: qemu-devel@nongnu.org
Cc: kwolf@redhat.com, jcody@redhat.com, Fam Zheng, rjones@redhat.com, stefanha@redhat.com
Date: Thu, 6 Jun 2013 14:25:49 +0800
Message-Id: <1370499959-8916-4-git-send-email-famz@redhat.com>
In-Reply-To: <1370499959-8916-1-git-send-email-famz@redhat.com>
References: <1370499959-8916-1-git-send-email-famz@redhat.com>
Subject: [Qemu-devel] [PATCH v7 03/13] curl: change curl_multi_do to curl_fd_handler

The driver currently calls curl_multi_do() at several points to drive
transfers, even though the same function is also registered as the socket
fd handler. This patch removes those internal calls: they are unnecessary,
because the handler is already invoked whenever the socket becomes readable
or writable. Since curl_multi_do() is now a pure fd handler, rename it to
curl_fd_handler(). It now takes a pointer to CURLSockInfo instead of a
pointer to BDRVCURLState.
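The CURLSockInfo type used below is introduced by an earlier patch in this
series and is not shown in this diff. Judging only from the fields the new
handler touches (sock->fd, sock->action, sock->s), it presumably looks
roughly like the sketch below; the field names and layout are assumptions
inferred from this patch, not the series' actual definition.

typedef struct CURLSockInfo {
    curl_socket_t fd;        /* socket passed to curl_sock_cb() */
    int action;              /* pending CURL_CSELECT_IN/OUT bitmask */
    struct BDRVCURLState *s; /* back-pointer to the driver state */
    /* presumably also some linkage so curl_sock_cb() can find the
     * entry for a given fd on later callbacks, e.g. a QLIST_ENTRY */
} CURLSockInfo;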
Signed-off-by: Fam Zheng
---
 block/curl.c | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/block/curl.c b/block/curl.c
index a11002b..82a7a30 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -92,7 +92,7 @@ typedef struct BDRVCURLState {
 } BDRVCURLState;
 
 static void curl_clean_state(CURLState *s);
-static void curl_multi_do(void *arg);
+static void curl_fd_handler(void *arg);
 static int curl_aio_flush(void *opaque);
 
 static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
@@ -110,17 +110,23 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
     }
     switch (action) {
         case CURL_POLL_IN:
-            qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, curl_aio_flush, s);
+            qemu_aio_set_fd_handler(fd, curl_fd_handler, NULL,
+                                    curl_aio_flush, sock);
+            sock->action |= CURL_CSELECT_IN;
             break;
         case CURL_POLL_OUT:
-            qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, curl_aio_flush, s);
+            qemu_aio_set_fd_handler(fd, NULL, curl_fd_handler, curl_aio_flush,
+                                    sock);
+            sock->action |= CURL_CSELECT_OUT;
             break;
         case CURL_POLL_INOUT:
-            qemu_aio_set_fd_handler(fd, curl_multi_do, curl_multi_do,
-                                    curl_aio_flush, s);
+            qemu_aio_set_fd_handler(fd, curl_fd_handler, curl_fd_handler,
+                                    curl_aio_flush, sock);
+            sock->action |= CURL_CSELECT_IN | CURL_CSELECT_OUT;
             break;
         case CURL_POLL_REMOVE:
             qemu_aio_set_fd_handler(fd, NULL, NULL, NULL, NULL);
+            sock->action = 0;
             break;
     }
 
@@ -226,9 +232,10 @@ static int curl_find_buf(BDRVCURLState *s, size_t start, size_t len,
     return FIND_RET_NONE;
 }
 
-static void curl_multi_do(void *arg)
+static void curl_fd_handler(void *arg)
 {
-    BDRVCURLState *s = (BDRVCURLState *)arg;
+    CURLSockInfo *sock = (CURLSockInfo *)arg;
+    BDRVCURLState *s = sock->s;
     int running;
     int r;
     int msgs_in_queue;
@@ -237,7 +244,9 @@ static void curl_fd_handler(void *arg)
         return;
 
     do {
-        r = curl_multi_socket_all(s->multi, &running);
+        r = curl_multi_socket_action(s->multi,
+                                     sock->fd, sock->action,
+                                     &running);
     } while(r == CURLM_CALL_MULTI_PERFORM);
 
     /* Try to find done transfers, so we can free the easy
@@ -302,7 +311,6 @@ static CURLState *curl_init_state(BDRVCURLState *s)
         }
         if (!state) {
            g_usleep(100);
-            curl_multi_do(s);
         }
     } while(!state);
 
@@ -483,7 +491,6 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags)
     s->multi = curl_multi_init();
     curl_multi_setopt(s->multi, CURLMOPT_SOCKETDATA, s);
     curl_multi_setopt(s->multi, CURLMOPT_SOCKETFUNCTION, curl_sock_cb);
-    curl_multi_do(s);
 
     qemu_opts_del(opts);
     return 0;
@@ -575,7 +582,6 @@ static void curl_readv_bh_cb(void *p)
     curl_easy_setopt(state->curl, CURLOPT_RANGE, state->range);
 
     curl_multi_add_handle(s->multi, state->curl);
-    curl_multi_do(s);
 }
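As background on the libcurl pattern this patch moves to: with the
multi-socket API, the application tells libcurl which single fd became
ready and with which events, instead of having libcurl rescan every socket
as curl_multi_socket_all() does. The sketch below is a rough,
QEMU-independent illustration of that flow; the struct sock_info and
fd_ready() names are invented for the example and only mirror what the
patch's CURLSockInfo and curl_fd_handler() do.

#include <curl/curl.h>

/* Sketch only: drive libcurl when the event loop reports that 'fd' is
 * ready.  'sock_info' mirrors the CURLSockInfo bookkeeping in this patch. */
struct sock_info {
    curl_socket_t fd;
    int action;     /* CURL_CSELECT_IN and/or CURL_CSELECT_OUT */
    CURLM *multi;
};

static void fd_ready(struct sock_info *sock)
{
    CURLMcode rc;
    CURLMsg *msg;
    int running, msgs_left;

    /* Tell libcurl which fd fired and which events are pending; this is
     * the call that replaces curl_multi_socket_all() in the patch. */
    do {
        rc = curl_multi_socket_action(sock->multi, sock->fd,
                                      sock->action, &running);
    } while (rc == CURLM_CALL_MULTI_PERFORM);

    /* Reap finished transfers.  block/curl.c does the same scan with
     * curl_multi_info_read() but recycles its CURLState via
     * curl_clean_state() instead of destroying the easy handle. */
    while ((msg = curl_multi_info_read(sock->multi, &msgs_left))) {
        if (msg->msg == CURLMSG_DONE) {
            curl_multi_remove_handle(sock->multi, msg->easy_handle);
            curl_easy_cleanup(msg->easy_handle);
        }
    }
}

The practical difference is that curl_multi_socket_all() walks every socket
libcurl owns on each call, while curl_multi_socket_action() only services
the fd that actually became ready, which is why the handler now needs the
per-socket fd and event bitmask carried in CURLSockInfo.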