diff mbox series

ubus: added ubus_handle_events function that "guaranties" execution of all polled events

Message ID mailman.15162.1676283239.3064139.openwrt-devel@lists.openwrt.org
State New
Headers show
Series ubus: added ubus_handle_events function that "guaranties" execution of all polled events | expand

Commit Message

BOICHUK Taras Feb. 13, 2023, 10:13 a.m. UTC
The sender domain has a DMARC Reject/Quarantine policy which disallows
sending mailing list messages using the original "From" header.

To mitigate this problem, the original message has been wrapped
automatically by the mailing list software.
If a previous setup or calling flow has set ctx->cancel_poll to true, the function ubus_handle_event may process ONLY ONE request, even though its comment says it processes events:

/* call this for read events on ctx->sock.fd when not using uloop */
static inline void ubus_handle_event(struct ubus_context *ctx)
{
	ctx->sock.cb(&ctx->sock, ULOOP_READ);
}

If I poll the ubus fd manually instead of using uloop, a subsequent call may process only ONE event, and the rest will be processed on the next loop cycle. I would like a function that guarantees that every pending request is processed in a single call to ubus_handle_event.

I suggest creating another function that will do this:

/* call this to read ALL events on ctx->sock.fd when not using uloop,
 * after polling the fd manually
 */
static inline void ubus_handle_events(struct ubus_context *ctx)
{
	ctx->cancel_poll = false;
	ctx->sock.cb(&ctx->sock, ULOOP_READ);
}

This idea comes from existing but unexposed functionality: the following function does the polling by itself and then processes all messages:

void __hidden ubus_poll_data(struct ubus_context *ctx, int timeout)
{
	struct pollfd pfd = {
		.fd = ctx->sock.fd,
		.events = POLLIN | POLLERR,
	};

	ctx->cancel_poll = false;
	poll(&pfd, 1, timeout ? timeout : -1);
	ubus_handle_data(&ctx->sock, ULOOP_READ);
}

Instead of letting it do the polling, we do it ourselves.

Please review and advise.

A few words regarding the intention behind this addition:
I am using a system that polls several different fds and processes the events on each of them.
Sometimes there is more work on the fds, so processing ubus events takes a long time with ubus_handle_event.
Additionally, due to previous code that sets up the ubus object, ctx->cancel_poll becomes true.

Comments

Jo-Philipp Wich Feb. 13, 2023, 1:12 p.m. UTC | #1
Hi,

> If a previous setup or calling flow has set ctx->cancel_poll to true, the
> function ubus_handle_event may process ONLY ONE request, even though its
> comment says it processes events:
> 
> /* call this for read events on ctx->sock.fd when not using uloop */ static
> inline void ubus_handle_event(struct ubus_context *ctx) { 
> ctx->sock.cb(&ctx->sock, ULOOP_READ); }
> 
> If I poll the ubus fd manually instead of using uloop, a subsequent call
> may process only ONE event, and the rest will be processed on the next
> loop cycle. I would like a function that guarantees that every pending
> request is processed in a single call to ubus_handle_event.

You're already using a foreign event loop / IO notification mechanism, so you
already have the means to determine socket read readiness. Invoking a library
function that does its own polling internally with arbitrary, uncontrollable
timeouts does not seem like a good design.

It would be better to implement a function that simply keeps calling
`get_next_msg(ctx, &recv_fd)` and `ubus_process_msg(ctx, &ctx->msgbuf,
recv_fd);` until `get_next_msg()` yields false.

~ Jo
BOICHUK Taras Feb. 13, 2023, 2:11 p.m. UTC | #2
Hi,

> You're already using a foreign event loop / IO notification mechanism, you
> already have means to determine socket read readiness. Invoking a library
> function that does its own polling internally with arbitrary, uncontrollable
> timeouts does not seem like a good design.

Maybe I don't get something right, but it seems that my patch does not do any
internal polling when calling ubus_handle_events. It will call the ubus_handle_data
function, i.e. the callback ctx->sock.cb, which will be executed with
ctx->cancel_poll set to false. The ctx->cancel_poll check could be redundant in
the loop, because I don't see any way its state could change during the loop flow.

> It would be better to implement a function that simply keeps calling
> `get_next_msg(ctx, &recv_fd)` and `ubus_process_msg(ctx, &ctx->msgbuf,
> recv_fd);` until `get_next_msg()` yields false.

ubus_handle_data seems to do exactly what you've described, so is there any
other reason to implement a separate function?
diff mbox series

Patch

diff --git a/libubus.h b/libubus.h
index dc42ea7..b4b7140 100644
--- a/libubus.h
+++ b/libubus.h
@@ -260,6 +260,15 @@  static inline void ubus_add_uloop(struct ubus_context *ctx)
 	uloop_fd_add(&ctx->sock, ULOOP_BLOCKING | ULOOP_READ);
 }
 
+/* call this to read ALL events on ctx->sock.fd when not using uloop,
+ * after polling the fd manually
+ */
+static inline void ubus_handle_events(struct ubus_context *ctx)
+{
+	ctx->cancel_poll = false;
+	ctx->sock.cb(&ctx->sock, ULOOP_READ);
+}
+
 /* call this for read events on ctx->sock.fd when not using uloop */
 static inline void ubus_handle_event(struct ubus_context *ctx)
 {
@@ -380,7 +389,7 @@  static inline int ubus_request_get_caller_fd(struct ubus_request_data *req)
 {
     int fd = req->req_fd;
     req->req_fd = -1;
-    
+
     return fd;
 }