From patchwork Wed Sep 23 16:15:33 2015
X-Patchwork-Submitter: Nithin Raju
X-Patchwork-Id: 521759
From: Nithin Raju <nithin@vmware.com>
To: dev@openvswitch.org
Date: Wed, 23 Sep 2015 09:15:33 -0700
Message-Id: <1443024933-47622-4-git-send-email-nithin@vmware.com>
In-Reply-To: <1443024933-47622-1-git-send-email-nithin@vmware.com>
References: <1443024933-47622-1-git-send-email-nithin@vmware.com>
Subject: [ovs-dev] [PATCH 4/4 v2] netlink-socket.c: event polling for packets on windows

Currently, we do busy-polling for packets on Windows. In this patch we
remove that code and schedule an event instead. The code has been tested
for packet reads, and the CPU utilization of ovs-vswitchd went down
drastically. I'll send out the changes to get vport events to work in a
separate patch.
Signed-off-by: Nithin Raju
Acked-by: Sairam Venugopal
---
v2: collected ACks
---
 lib/netlink-socket.c | 27 ++++++++++++++++++---------
 1 file changed, 18 insertions(+), 9 deletions(-)

diff --git a/lib/netlink-socket.c b/lib/netlink-socket.c
index 42eb232..35c115a 100644
--- a/lib/netlink-socket.c
+++ b/lib/netlink-socket.c
@@ -1178,10 +1178,17 @@ pend_io_request(struct nl_sock *sock)
     struct ovs_header *ovs_header;
     struct nlmsghdr *nlmsg;
     uint32_t seq;
-    int retval;
+    int retval = 0;
     int error;
     DWORD bytes;
     OVERLAPPED *overlapped = CONST_CAST(OVERLAPPED *, &sock->overlapped);
+    uint16_t cmd = OVS_CTRL_CMD_WIN_PEND_PACKET_REQ;
+
+    ovs_assert(sock->read_ioctl == OVS_IOCTL_READ_PACKET ||
+               sock->read_ioctl == OVS_IOCTL_READ_EVENT);
+    if (sock->read_ioctl == OVS_IOCTL_READ_EVENT) {
+        cmd = OVS_CTRL_CMD_WIN_PEND_REQ;
+    }

     int ovs_msg_size = sizeof (struct nlmsghdr) + sizeof (struct genlmsghdr) +
                        sizeof (struct ovs_header);
@@ -1190,7 +1197,7 @@ pend_io_request(struct nl_sock *sock)

     seq = nl_sock_allocate_seq(sock, 1);
     nl_msg_put_genlmsghdr(&request, 0, OVS_WIN_NL_CTRL_FAMILY_ID, 0,
-                          OVS_CTRL_CMD_WIN_PEND_REQ, OVS_WIN_CONTROL_VERSION);
+                          cmd, OVS_WIN_CONTROL_VERSION);
     nlmsg = nl_msg_nlmsghdr(&request);
     nlmsg->nlmsg_seq = seq;
     nlmsg->nlmsg_pid = sock->pid;
@@ -1206,13 +1213,10 @@ pend_io_request(struct nl_sock *sock)
         if (error != ERROR_IO_INCOMPLETE && error != ERROR_IO_PENDING) {
             VLOG_ERR("nl_sock_wait failed - %s\n", ovs_format_message(error));
             retval = EINVAL;
-            goto done;
         }
     } else {
-        /* The I/O was completed synchronously */
-        poll_immediate_wake();
+        retval = EAGAIN;
     }
-    retval = 0;

 done:
     ofpbuf_uninit(&request);
@@ -1228,10 +1232,15 @@ nl_sock_wait(const struct nl_sock *sock, short int events)
 {
 #ifdef _WIN32
     if (sock->overlapped.Internal != STATUS_PENDING) {
-        pend_io_request(CONST_CAST(struct nl_sock *, sock));
-        /* XXX: poll_wevent_wait(sock->overlapped.hEvent); */
+        int ret = pend_io_request(CONST_CAST(struct nl_sock *, sock));
+        if (ret == 0) {
+            poll_wevent_wait(sock->overlapped.hEvent);
+        } else {
+            poll_immediate_wake();
+        }
+    } else {
+        poll_wevent_wait(sock->overlapped.hEvent);
     }
-    poll_immediate_wake(); /* XXX: temporary. */
 #else
     poll_fd_wait(sock->fd, events);
 #endif