
[net,v2] net: sctp: wake up all assocs if sndbuf policy is per socket

Message ID 1396970773-17500-1-git-send-email-dborkman@redhat.com
State Accepted, archived
Delegated to: David Miller
Headers show

Commit Message

Daniel Borkmann April 8, 2014, 3:26 p.m. UTC
SCTP charges chunks for wmem accounting via skb->truesize in
sctp_set_owner_w(), and sctp_wfree() respectively as the
reverse operation. If a sender runs out of wmem, it needs to
wait via sctp_wait_for_sndbuf(), and gets woken up by a call
to __sctp_write_space() mostly via sctp_wfree().

__sctp_write_space() is called per association. Although
we assign sk->sk_write_space() to sctp_write_space(), which
would then operate per socket, it is only invoked when send
space is increased via the SO_SNDBUF socket option, since
SOCK_USE_WRITE_QUEUE is set and sock_wfree() therefore does
not call it.

Commit 4c3a5bdae293 ("sctp: Don't charge for data in sndbuf
again when transmitting packet") fixed an issue where in case
sctp_packet_transmit() manages to queue up more than sndbuf
bytes, sctp_wait_for_sndbuf() will never be woken up again
unless it is interrupted by a signal. However, a remaining
issue is that if net.sctp.sndbuf_policy=0, that is,
accounting per socket, and one-to-many sockets are in use,
the reclaimed write space from sctp_wfree() is 'unfairly'
handed back on the server to whichever association happens
to be woken up again via __sctp_write_space(), while the
remaining associations are never woken up again (unless
by a signal).

The effect disappears with net.sctp.sndbuf_policy=1, that
is, wmem accounting per association, as it guarantees a
fair share of wmem among associations.

Therefore, if we have reclaimed memory under per-socket
accounting, wake all associations on the socket in a fair
manner; that is, traverse the socket's association list
starting from the current neighbour of the association and
issue a __sctp_write_space() to everyone until we end up
waking ourselves. This guarantees that no association is
preferred over another, and even if more associations are
added to the one-to-many session, all receivers will get
messages from the server and are not stalled forever under
high load. This setting still leaves the advantage of
per-socket accounting intact, as an association can still
use up global limits if unused by others.

Fixes: 4eb701dfc618 ("[SCTP] Fix SCTP sendbuffer accouting.")
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Thomas Graf <tgraf@suug.ch>
Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: Vlad Yasevich <vyasevic@redhat.com>
---
 [ When net-next opens up again, we need to think how
   we can ideally make a new list interface and simplify
   both open-coded list traversals. ]

 v1->v2:
  - Improved comment, included note about locking

 net/sctp/socket.c | 36 +++++++++++++++++++++++++++++++++++-
 1 file changed, 35 insertions(+), 1 deletion(-)

Comments

Vlad Yasevich April 8, 2014, 3:28 p.m. UTC | #1
On 04/08/2014 11:26 AM, Daniel Borkmann wrote:
> SCTP charges chunks for wmem accounting via skb->truesize in
> [...]
> Cc: Vlad Yasevich <vyasevic@redhat.com>

Acked-by: Vlad Yasevich <vyasevic@redhat.com>

-vlad


--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Neil Horman April 8, 2014, 3:37 p.m. UTC | #2
On Tue, Apr 08, 2014 at 05:26:13PM +0200, Daniel Borkmann wrote:
> SCTP charges chunks for wmem accounting via skb->truesize in
> [...]
Acked-by: Neil Horman <nhorman@tuxdriver.com>

Daniel Borkmann April 8, 2014, 4:57 p.m. UTC | #3
Hm, I still found an issue, please hold on with this one, thanks.
David Miller April 8, 2014, 5:08 p.m. UTC | #4
From: Daniel Borkmann <dborkman@redhat.com>
Date: Tue,  8 Apr 2014 17:26:13 +0200

> SCTP charges chunks for wmem accounting via skb->truesize in
> [...]
> Signed-off-by: Daniel Borkmann <dborkman@redhat.com>

Applied and queued up for -stable, thanks.
David Miller April 8, 2014, 5:18 p.m. UTC | #5
From: Daniel Borkmann <dborkman@redhat.com>
Date: Tue, 08 Apr 2014 18:57:06 +0200

> Hm, I still found an issue, please hold on with this one, thanks.

Ugh too late I pushed it out, you'll need to send a relative fix.
Daniel Borkmann April 8, 2014, 5:20 p.m. UTC | #6
On 04/08/2014 07:18 PM, David Miller wrote:
> From: Daniel Borkmann <dborkman@redhat.com>
> Date: Tue, 08 Apr 2014 18:57:06 +0200
>
>> Hm, I still found an issue, please hold on with this one, thanks.
>
> Ugh too late I pushed it out, you'll need to send a relative fix.

No problem, will do, thanks. I suggest taking it out of
the stable queue for now. Sorry for any inconvenience.
David Miller April 8, 2014, 6:19 p.m. UTC | #7
From: Daniel Borkmann <dborkman@redhat.com>
Date: Tue, 08 Apr 2014 19:20:16 +0200

> On 04/08/2014 07:18 PM, David Miller wrote:
>> From: Daniel Borkmann <dborkman@redhat.com>
>> Date: Tue, 08 Apr 2014 18:57:06 +0200
>>
>>> Hm, I still found an issue, please hold on with this one, thanks.
>>
>> Ugh too late I pushed it out, you'll need to send a relative fix.
> 
> No problem, will do, thanks. I suggest to take it out from
> the stable queue for now. Sorry for any inconvenience.

I'm going to keep it in there and add the fixup when you submit it,
I'll make a mental note not to apply it until the fixup is in too.

I always re-read all of the discussion in the patchwork entry for
a patch when I backport it to -stable, so it's very unlikely that
I will accidentally apply it without the fixup.

Thanks.
David Miller April 8, 2014, 6:46 p.m. UTC | #8
Daniel and Vlad, I'm about to send Linus a pull request.

I know that you still need to fixup this SCTP change and it'll be
in there, but I really need to get the changes in my tree staged
so that I can do a set of -stable submissions.

So please don't freak out, I know that this change still needs work
and shouldn't go to -stable just yet :-)
Daniel Borkmann April 8, 2014, 7:04 p.m. UTC | #9
On 04/08/2014 08:46 PM, David Miller wrote:
>
> Daniel and Vlad, I'm about to send Linus a pull request.
>
> I know that you still need to fixup this SCTP change and it'll be
> in there, but I really need to get the changes in my tree staged
> so that I can do a set of -stable submissions.
>
> So please don't freak out, I know that this change still needs work
> and shouldn't go to -stable just yet :-)

Noted, thanks. I think the issue is that in sctp_association_free()
we do a list_del(&asoc->asocs) and then flush sctp_outq_free() which
will then access on sctp_wfree() a poisoned entry. I think this
should be list_del_init() instead.
Vlad Yasevich April 8, 2014, 7:37 p.m. UTC | #10
On 04/08/2014 03:04 PM, Daniel Borkmann wrote:
> On 04/08/2014 08:46 PM, David Miller wrote:
>>
>> Daniel and Vlad, I'm about to send Linus a pull request.
>>
>> I know that you still need to fixup this SCTP change and it'll be
>> in there, but I really need to get the changes in my tree staged
>> so that I can do a set of -stable submissions.
>>
>> So please don't freak out, I know that this change still needs work
>> and shouldn't go to -stable just yet :-)
> 
> Noted, thanks. I think the issue is that in sctp_association_free()
> we do a list_del(&asoc->asocs) and then flush sctp_outq_free() which
> will then access on sctp_wfree() a poisoned entry. I think this
> should be list_del_init() instead.

Switching to list_del_init() will solve the crash, but will not address
the issue.  You've just removed an association and need to notify others
of available space.  You can't do that since you've been unlinked.

We either need a rcu_style unlink, or detect the delete case and loop
from the beginning.

You can do #2 easily enough by looking at asoc->base.dead to decide
where to start looping.

-vlad
Daniel Borkmann April 8, 2014, 8:50 p.m. UTC | #11
On 04/08/2014 09:37 PM, Vlad Yasevich wrote:
> On 04/08/2014 03:04 PM, Daniel Borkmann wrote:
>> On 04/08/2014 08:46 PM, David Miller wrote:
>>>
>>> Daniel and Vlad, I'm about to send Linus a pull request.
>>>
>>> I know that you still need to fixup this SCTP change and it'll be
>>> in there, but I really need to get the changes in my tree staged
>>> so that I can do a set of -stable submissions.
>>>
>>> So please don't freak out, I know that this change still needs work
>>> and shouldn't go to -stable just yet :-)
>>
>> Noted, thanks. I think the issue is that in sctp_association_free()
>> we do a list_del(&asoc->asocs) and then flush sctp_outq_free() which
>> will then access on sctp_wfree() a poisoned entry. I think this
>> should be list_del_init() instead.
>
> Switching to list_del_init() will solve the crash, but will not address
> the issue.  You've just removed an association and need to notify others
> of available space.  You can't do that since you've been unlinked.
>
> We either need a rcu_style unlink, or detect the delete case and loop
> from the beginning.
>
> You can do #2 easily enough by looking at asoc->base.dead to decide
> where to start looping.

Agreed, I think #2 is better, so we can simply call and return with
sctp_write_space() if we see that the assoc is dead; I think SCTP is
doing too much deferring to RCU anyway. ;)

Patch

diff --git a/net/sctp/socket.c b/net/sctp/socket.c
index 981aaf8..96cfaba 100644
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -6593,6 +6593,40 @@  static void __sctp_write_space(struct sctp_association *asoc)
 	}
 }
 
+static void sctp_wake_up_waiters(struct sock *sk,
+				 struct sctp_association *asoc)
+{
+	struct sctp_association *tmp = asoc;
+
+	/* We do accounting for the sndbuf space per association,
+	 * so we only need to wake our own association.
+	 */
+	if (asoc->ep->sndbuf_policy)
+		return __sctp_write_space(asoc);
+
+	/* Accounting for the sndbuf space is per socket, so we
+	 * need to wake up others, try to be fair and in case of
+	 * other associations, let them have a go first instead
+	 * of just doing a sctp_write_space() call.
+	 *
+	 * Note that we reach sctp_wake_up_waiters() only when
+	 * associations free up queued chunks, thus we are under
+	 * lock and the list of associations on a socket is
+	 * guaranteed not to change.
+	 */
+	for (tmp = list_next_entry(tmp, asocs); 1;
+	     tmp = list_next_entry(tmp, asocs)) {
+		/* Manually skip the head element. */
+		if (&tmp->asocs == &((sctp_sk(sk))->ep->asocs))
+			continue;
+		/* Wake up association. */
+		__sctp_write_space(tmp);
+		/* We've reached the end. */
+		if (tmp == asoc)
+			break;
+	}
+}
+
 /* Do accounting for the sndbuf space.
  * Decrement the used sndbuf space of the corresponding association by the
  * data size which was just transmitted(freed).
@@ -6620,7 +6654,7 @@  static void sctp_wfree(struct sk_buff *skb)
 	sk_mem_uncharge(sk, skb->truesize);
 
 	sock_wfree(skb);
-	__sctp_write_space(asoc);
+	sctp_wake_up_waiters(sk, asoc);
 
 	sctp_association_put(asoc);
 }