From patchwork Sat Sep 9 05:50:23 2017
X-Patchwork-Submitter: Michael Witten
X-Patchwork-Id: 811912
X-Patchwork-Delegate: davem@davemloft.net
Subject: [PATCH v1 3/3] net: skb_queue_purge(): lock/unlock the queue only once
Date: Sat, 9 Sep 2017 05:50:23 -0000
From: Michael Witten
To: "David S. Miller", Alexey Kuznetsov, Hideaki YOSHIFUJI
Cc: Stephen Hemminger, Eric Dumazet, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org
Message-ID: <0c67e720495b4c1a920ed0f7a7e1daeb-mfwitten@gmail.com>
In-Reply-To: <60c8906b751d4915be456009c220516e-mfwitten@gmail.com>
References: <45aab5effc0c424a992646a97cf2ec14-mfwitten@gmail.com>
 <60c8906b751d4915be456009c220516e-mfwitten@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

Thanks for your input, Eric Dumazet and Stephen Hemminger; based on
your observations, this version of the patch implements a very
lightweight purging of the queue.
To apply this patch, save this email to:

  /path/to/email

and then run:

  git am --scissors /path/to/email

You may also fetch this patch from GitHub:

  git checkout -b test 5969d1bb3082b41eba8fd2c826559abe38ccb6df
  git pull https://github.com/mfwitten/linux.git net/tcp-ip/01-cleanup/02

Sincerely,
Michael Witten

8<----8<----8<----8<----8<----8<----8<----8<----8<----8<----8<----8<----8<----

Hitherto, the queue's lock has been locked/unlocked every time an item
is dequeued; this seems not only inefficient, but also incorrect, as
the whole point of `skb_queue_purge()' is to clear the queue, presumably
without giving any other thread a chance to manipulate the queue in the
interim.

With this commit, the queue's lock is locked/unlocked only once when
`skb_queue_purge()' is called, and in a way that disables the IRQs for
only a minimal amount of time.

This is achieved by atomically re-initializing the queue (thereby
clearing it), and then freeing each of the items as though it were
enqueued in a private queue that doesn't require locking.

Signed-off-by: Michael Witten
---
 net/core/skbuff.c | 26 ++++++++++++++++++--------
 1 file changed, 18 insertions(+), 8 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 68065d7d383f..bd26b0bde784 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -2825,18 +2825,28 @@ struct sk_buff *skb_dequeue_tail(struct sk_buff_head *list)
 EXPORT_SYMBOL(skb_dequeue_tail);
 
 /**
- * skb_queue_purge - empty a list
- * @list: list to empty
+ * skb_queue_purge - empty a queue
+ * @q: the queue to empty
  *
- * Delete all buffers on an &sk_buff list. Each buffer is removed from
- * the list and one reference dropped. This function takes the list
- * lock and is atomic with respect to other list locking functions.
+ * Dequeue and free each socket buffer that is in @q.
+ *
+ * This function is atomic with respect to other queue-locking functions.
  */
-void skb_queue_purge(struct sk_buff_head *list)
+void skb_queue_purge(struct sk_buff_head *q)
 {
-	struct sk_buff *skb;
-	while ((skb = skb_dequeue(list)) != NULL)
+	unsigned long flags;
+	struct sk_buff *skb, *next, *head = (struct sk_buff *)q;
+
+	spin_lock_irqsave(&q->lock, flags);
+	skb = q->next;
+	__skb_queue_head_init(q);
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	while (skb != head) {
+		next = skb->next;
 		kfree_skb(skb);
+		skb = next;
+	}
 }
 EXPORT_SYMBOL(skb_queue_purge);