From patchwork Tue Jul 5 13:13:15 2016
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 644756
X-Patchwork-Delegate: davem@davemloft.net
Subject: [PATCH net-next 09/24] rxrpc: Move usage count getting into rxrpc_queue_conn()
From: David Howells
To: davem@davemloft.net
Cc: dhowells@redhat.com, netdev@vger.kernel.org,
    linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Tue, 05 Jul 2016 14:13:15 +0100
Message-ID: <146772439548.21657.7491204679832471597.stgit@warthog.procyon.org.uk>
In-Reply-To: <146772433082.21657.14046392058484946464.stgit@warthog.procyon.org.uk>
References: <146772433082.21657.14046392058484946464.stgit@warthog.procyon.org.uk>
User-Agent: StGit/0.17.1-dirty
Organization: Red Hat UK Ltd.

Rather than calling rxrpc_get_connection() manually before calling
rxrpc_queue_conn(), do it inside the queue wrapper.

This allows us to do some important fixes:

 (1) If the usage count is 0, do nothing.  This prevents connections from
     being reanimated once they're dead.

 (2) If rxrpc_queue_work() fails because the work item is already queued,
     retract the usage count increment which would otherwise be lost.

 (3) Don't take a ref on the connection in the work function.  Since the
     ref is now passed in with the work item, taking another one there is
     unnecessary, and doing it in the work function is too late anyway.
     Previously, connection-directed packets held a ref on the connection,
     but that's not really the best idea.

And a further useful change:

 (*) We no longer need to take a refcount on the connection in the
     data_ready handler unless we invoke the connection's work item; we're
     using RCU there, so the ref is otherwise redundant.
Signed-off-by: David Howells
---

 net/rxrpc/ar-internal.h | 9 ++++++++-
 net/rxrpc/call_accept.c | 1 -
 net/rxrpc/conn_event.c  | 8 +-------
 net/rxrpc/input.c       | 1 -
 4 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index 2a8710dac3c5..d43cb7831693 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -585,10 +585,17 @@ static inline void rxrpc_get_connection(struct rxrpc_connection *conn)
 	atomic_inc(&conn->usage);
 }
 
+static inline
+struct rxrpc_connection *rxrpc_get_connection_maybe(struct rxrpc_connection *conn)
+{
+	return atomic_inc_not_zero(&conn->usage) ? conn : NULL;
+}
 static inline void rxrpc_queue_conn(struct rxrpc_connection *conn)
 {
-	rxrpc_queue_work(&conn->processor);
+	if (rxrpc_get_connection_maybe(conn) &&
+	    !rxrpc_queue_work(&conn->processor))
+		rxrpc_put_connection(conn);
 }
 
 /*
diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
index 1c0860df150e..5367dbe9b96f 100644
--- a/net/rxrpc/call_accept.c
+++ b/net/rxrpc/call_accept.c
@@ -132,7 +132,6 @@ static int rxrpc_accept_incoming_call(struct rxrpc_local *local,
 		_debug("await conn sec");
 		list_add_tail(&call->accept_link, &rx->secureq);
 		call->conn->state = RXRPC_CONN_SERVICE_CHALLENGING;
-		rxrpc_get_connection(call->conn);
 		set_bit(RXRPC_CONN_EV_CHALLENGE, &call->conn->events);
 		rxrpc_queue_conn(call->conn);
 	} else {
diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
index b9c39b83eddb..9ceddd3fd5db 100644
--- a/net/rxrpc/conn_event.c
+++ b/net/rxrpc/conn_event.c
@@ -266,12 +266,8 @@ void rxrpc_process_connection(struct work_struct *work)
 
 	_enter("{%d}", conn->debug_id);
 
-	rxrpc_get_connection(conn);
-
-	if (test_and_clear_bit(RXRPC_CONN_EV_CHALLENGE, &conn->events)) {
+	if (test_and_clear_bit(RXRPC_CONN_EV_CHALLENGE, &conn->events))
 		rxrpc_secure_connection(conn);
-		rxrpc_put_connection(conn);
-	}
 
 	/* go through the conn-level event packets, releasing the ref on this
 	 * connection that each one has when we've finished with it */
@@ -286,7 +282,6 @@ void rxrpc_process_connection(struct work_struct *work)
 			goto requeue_and_leave;
 		case -ECONNABORTED:
 		default:
-			rxrpc_put_connection(conn);
 			rxrpc_free_skb(skb);
 			break;
 		}
@@ -304,7 +299,6 @@ requeue_and_leave:
 protocol_error:
 	if (rxrpc_abort_connection(conn, -ret, abort_code) < 0)
 		goto requeue_and_leave;
-	rxrpc_put_connection(conn);
 	rxrpc_free_skb(skb);
 	_leave(" [EPROTO]");
 	goto out;
diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
index fe7ff339d7e5..b993f2dc5a09 100644
--- a/net/rxrpc/input.c
+++ b/net/rxrpc/input.c
@@ -580,7 +580,6 @@ static void rxrpc_post_packet_to_conn(struct rxrpc_connection *conn,
 {
 	_enter("%p,%p", conn, skb);
 
-	rxrpc_get_connection(conn);
 	skb_queue_tail(&conn->rx_queue, skb);
 	rxrpc_queue_conn(conn);
 }
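
For illustration: a minimal userspace C11 sketch of the get-maybe/queue/
put-on-failure pattern that the new rxrpc_queue_conn() follows.  The real
code uses atomic_inc_not_zero() and the kernel workqueue API; the names
conn_get_maybe(), conn_put(), queue_work_stub() and queue_conn() below are
hypothetical stand-ins invented for the example, not rxrpc functions.

/*
 * Illustrative only -- not kernel code.  Demonstrates: take a reference
 * only while the object is still live, and retract that reference if the
 * work item turns out to be queued already.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct conn {
	atomic_int usage;	/* 0 means the object is dead */
	atomic_bool queued;	/* stand-in for "work item already pending" */
};

/* Userspace analogue of atomic_inc_not_zero(): succeed only while usage > 0. */
static struct conn *conn_get_maybe(struct conn *conn)
{
	int old = atomic_load(&conn->usage);

	do {
		if (old == 0)
			return NULL;	/* dead - don't reanimate it */
	} while (!atomic_compare_exchange_weak(&conn->usage, &old, old + 1));

	return conn;
}

static void conn_put(struct conn *conn)
{
	atomic_fetch_sub(&conn->usage, 1);
}

/* Stand-in for queue_work(): returns false if the item was already queued. */
static bool queue_work_stub(struct conn *conn)
{
	return !atomic_exchange(&conn->queued, true);
}

/* The pattern from the patch: get-maybe, queue, and undo the get on failure. */
static void queue_conn(struct conn *conn)
{
	if (conn_get_maybe(conn) && !queue_work_stub(conn))
		conn_put(conn);	/* already queued: retract the ref that would be lost */
}

int main(void)
{
	struct conn conn = { .usage = 1, .queued = false };

	queue_conn(&conn);	/* queues the work; the ref travels with the item */
	queue_conn(&conn);	/* already queued; the extra ref is retracted */
	printf("usage=%d queued=%d\n",
	       atomic_load(&conn.usage), (int)atomic_load(&conn.queued));
	return 0;
}

The point of the pattern is that the reference taken by conn_get_maybe() is
owned by the queued work item, so a failed queue attempt has to give it back
immediately rather than leak it, and a zero usage count stops dead objects
from being requeued at all.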