
[OpenWrt-Devel,uhttpd] client: allow keep-alive for POST requests

Message ID 20200313122024.707576-1-jo@mein.io
State Accepted

Commit Message

Jo-Philipp Wich March 13, 2020, 12:20 p.m. UTC
Allow POST requests via persistent connections to improve performance
especially when using HTTPS on older devices.

After this change, average page load times in LuCI improve significantly
once the TLS connections are initiated.

When testing an ar71xx 19.07.2 build on an Ethernet-connected TL-WR1043ND
using luci-ssl-openssl and the ustream-openssl backend, the average page
load time for the main status page decreased to 1.3s compared to 4.7s
before, and the interface and wireless configuration pages loaded in 1.2s
each compared to 4.2s and 4.9s respectively before.

Signed-off-by: Jo-Philipp Wich <jo@mein.io>
---
 client.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

Comments

Wes Turner March 13, 2020, 4:01 p.m. UTC | #1
Are there *new* security implications of allowing keep-alive?

Slowloris DoS comes to mind:
https://en.wikipedia.org/wiki/Slowloris_(computer_security)

And the article mentions a number of tools.

Older devices are likely somewhat trivially DoS-able without this patch;
but maybe include a config option to disable keep-alive?

What happens to RAM and CPU usage when there are multiple tabs open with
keep-alive on?

Jo-Philipp Wich March 13, 2020, 4:39 p.m. UTC | #2
Hi Wes,

> Are there *new* security implications of allowing keep-alive?

I don't see any immediate concerns. You can trigger resource-intensive calls
via GET, HEAD, PATCH, PUT or DELETE as well; all of them were allowed for
keep-alive previously, only POST was filtered for unknown reasons.

> Slowloris DoS comes to mind:
> https://en.wikipedia.org/wiki/Slowloris_(computer_security)

The DoS susceptibility should be the same with or without this patch.

> Older devices are likely somewhat trivially DoS-able without this patch; but
> maybe include a config option to disable keep-alive?

Disabling keep-alive has always been supported; either use
uci set uhttpd.main.http_keepalive=0 or, alternatively, uhttpd -k 0.
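Spelled out as a shell session (a sketch for a standard OpenWrt image; the commit and service restart steps are the usual UCI workflow, added here for completeness):

```shell
# Persistently disable HTTP keep-alive in uhttpd via UCI
uci set uhttpd.main.http_keepalive=0
uci commit uhttpd
/etc/init.d/uhttpd restart

# Or as a one-off, pass the option on the uhttpd command line
# (remaining listen/home options elided):
# uhttpd -k 0 ...
```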

> What happens to RAM and CPU usage when there are multiple tabs open with
> keep-alive on?

Testing with 6 open browser tabs, all refreshing the main status page:

With POST keep-alive:    uhttpd VSZ 4316K, CPU 5% usr, 6% sys
Without POST keep-alive: uhttpd VSZ 4756K, CPU 11% usr, 8% sys

That's not a scientific benchmark though. In general you trade CPU (TLS setup,
TCP connects) for memory (resident established connections).

In case of non-malicious clients (like normal browser tabs refreshing data in
the background) the memory consumption seems to be even lower because there
are fewer active TLS sessions at any point in time, and almost no TLS
connection attempts once all required sessions are established.


Regards,
Jo
Wes Turner March 13, 2020, 5:42 p.m. UTC | #3
On Fri, Mar 13, 2020, 12:39 PM Jo-Philipp Wich <jo@mein.io> wrote:

> Hi Wes,
>
> > Are there *new* security implications of allowing keep-alive?
>
> I don't see any immediate concerns. You can trigger resource-intensive
> calls via GET, HEAD, PATCH, PUT or DELETE as well; all of them were
> allowed for keep-alive previously, only POST was filtered for unknown
> reasons.
>
> > Slowloris DoS comes to mind:
> > https://en.wikipedia.org/wiki/Slowloris_(computer_security)
>
> The DoS susceptibility should be the same with or without this patch.
>
> > Older devices are likely somewhat trivially DoS-able without this
> > patch; but maybe include a config option to disable keep-alive?
>
> Disabling keep-alive has always been supported, either use
> uci set uhttpd.main.http_keepalive=0 or alternatively uhttpd -k 0
>

Thanks


> > What happens to RAM and CPU usage when there are multiple tabs open with
> > keep-alive on?
>
> Testing with 6 open browser tabs, all refreshing the main status page:
>
> With POST keep-alive:    uhttpd VSZ 4316K, CPU 5% usr, 6% sys
> Without POST keep-alive: uhttpd VSZ 4756K, CPU 11% usr, 8% sys
>
> That's not a scientific benchmark though. In general you trade CPU (TLS
> setup, TCP connects) for memory (resident established connections).
>
> In case of non-malicious clients (like normal browser tabs refreshing
> data in the background) the memory consumption seems to be even lower
> because there are fewer active TLS sessions at any point in time, and
> almost no TLS connection attempts once all required sessions are
> established.
>

That sounds ideal. Is this with or without the "[OpenWrt-Devel] [PATCH
ustream-ssl] ustream-openssl: clear error stack before SSL_read/SSL_write"
patch?
Jo-Philipp Wich March 13, 2020, 9:51 p.m. UTC | #4
Hi Wes,

> That sounds ideal. Is this with or without the "[OpenWrt-Devel] [PATCH
> ustream-ssl] ustream-openssl: clear error stack before SSL_read/SSL_write" patch?

It is with the error stack patch included. Without it, I wasn't even able to
fully load the page most of the time.

Regards,
Jo

Patch

diff --git a/client.c b/client.c
index 92f7609..2a2393f 100644
--- a/client.c
+++ b/client.c
@@ -194,8 +194,7 @@  static int client_parse_request(struct client *cl, char *data)
 
 	req->method = h_method;
 	req->version = h_version;
-	if (req->version < UH_HTTP_VER_1_1 || req->method == UH_HTTP_MSG_POST ||
-	    !conf.http_keepalive)
+	if (req->version < UH_HTTP_VER_1_1 || !conf.http_keepalive)
 		req->connection_close = true;
 
 	return CLIENT_STATE_HEADER;