From patchwork Mon Feb 13 10:13:49 2023
From: BOICHUK Taras
To: "openwrt-devel@lists.openwrt.org"
Subject: [PATCH] ubus: add ubus_handle_events function that "guarantees" execution of all polled events
Date: Mon, 13 Feb 2023 10:13:49 +0000
List-Id: OpenWrt Development List

If a previous setup step or call flow has set ctx->cancel_poll to true, ubus_handle_event may process ONLY ONE request, even though its comment says it handles read events in general:

	/* call this for read events on ctx->sock.fd when not using uloop */
	static inline void ubus_handle_event(struct ubus_context *ctx)
	{
		ctx->sock.cb(&ctx->sock, ULOOP_READ);
	}

If I poll the ubus fd manually instead of letting uloop do it, a single call to this function may then process just ONE event; the remaining events are only handled on the next loop cycle.
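To make the failure mode concrete, here is a minimal sketch of the manual-polling loop I am describing (illustrative only, not part of the patch; it assumes a connection to the default ubus socket):

	#include <poll.h>
	#include <libubus.h>

	int main(void)
	{
		struct ubus_context *ctx = ubus_connect(NULL); /* NULL: default socket path */
		struct pollfd pfd;

		if (!ctx)
			return 1;

		pfd.fd = ctx->sock.fd;
		pfd.events = POLLIN | POLLERR;

		for (;;) {
			if (poll(&pfd, 1, -1) <= 0)
				continue;
			/* When ctx->cancel_poll is true, this dispatches at most
			 * one queued message; the rest wait for the next poll()
			 * wakeup, even though they are already readable. */
			ubus_handle_event(ctx);
		}
	}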
I would like a function that guarantees that every pending request is processed in a single call to ubus_handle_event. I suggest adding another function that does this:

	/* call this to read ALL events on ctx->sock.fd when not using uloop,
	 * after polling the fd manually
	 */
	static inline void ubus_handle_events(struct ubus_context *ctx)
	{
		ctx->cancel_poll = false;
		ctx->sock.cb(&ctx->sock, ULOOP_READ);
	}

The idea comes from existing but unexposed functionality: the function below does the polling by itself and then processes all pending messages:

	void __hidden ubus_poll_data(struct ubus_context *ctx, int timeout)
	{
		struct pollfd pfd = {
			.fd = ctx->sock.fd,
			.events = POLLIN | POLLERR,
		};

		ctx->cancel_poll = false;
		poll(&pfd, 1, timeout ? timeout : -1);
		ubus_handle_data(&ctx->sock, ULOOP_READ);
	}

Instead of letting it do the polling, we do the polling ourselves. Please review and advise.

A few words regarding the intention of this addition: the system I work on polls several different fds and processes the events on each of them. Sometimes there is more work pending on the other fds, so processing ubus events one at a time with ubus_handle_event takes a long time. Additionally, due to earlier code that sets up a ubus object, ctx->cancel_poll becomes true.

diff --git a/libubus.h b/libubus.h
index dc42ea7..b4b7140 100644
--- a/libubus.h
+++ b/libubus.h
@@ -260,6 +260,15 @@ static inline void ubus_add_uloop(struct ubus_context *ctx)
 	uloop_fd_add(&ctx->sock, ULOOP_BLOCKING | ULOOP_READ);
 }
 
+/* call this to read ALL events on ctx->sock.fd when not using uloop,
+ * after polling the fd manually
+ */
+static inline void ubus_handle_events(struct ubus_context *ctx)
+{
+	ctx->cancel_poll = false;
+	ctx->sock.cb(&ctx->sock, ULOOP_READ);
+}
+
 /* call this for read events on ctx->sock.fd when not using uloop */
 static inline void ubus_handle_event(struct ubus_context *ctx)
 {
@@ -380,7 +389,7 @@ static inline int ubus_request_get_caller_fd(struct ubus_request_data *req)
 {
 	int fd = req->req_fd;
 	req->req_fd = -1;
-	
+
 	return fd;
 }
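For completeness, a sketch of how I intend to call the new helper once the patch is applied (the function name drain_ubus is mine, for illustration only):

	#include <poll.h>
	#include <libubus.h>

	/* Service the ubus fd from an externally owned event loop: after one
	 * poll() wakeup, drain every message already queued on the socket. */
	static void drain_ubus(struct ubus_context *ctx)
	{
		struct pollfd pfd = {
			.fd = ctx->sock.fd,
			.events = POLLIN | POLLERR,
		};

		if (poll(&pfd, 1, -1) <= 0)
			return;

		/* Proposed helper: resets cancel_poll before invoking the socket
		 * callback, so dispatch continues until no queued messages remain. */
		ubus_handle_events(ctx);
	}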