From patchwork Tue Apr 13 07:14:17 2010
X-Patchwork-Submitter: Eric Dumazet
X-Patchwork-Id: 50046
X-Patchwork-Delegate: davem@davemloft.net
Subject: Re: BUG: using smp_processor_id() in preemptible [00000000] code:
 avahi-daemon: caller is netif_rx
From: Eric Dumazet
To: Tom Herbert
Cc: Eric Paris, netdev@vger.kernel.org, David Miller
References: <1271100042.9831.20.camel@localhost>
 <1271101251.16881.135.camel@edumazet-laptop>
Date: Tue, 13 Apr 2010 09:14:17 +0200
Message-ID: <1271142857.16881.193.camel@edumazet-laptop>
X-Mailing-List: netdev@vger.kernel.org

On Monday, 12 April 2010 at 13:54 -0700, Tom Herbert wrote:
> Would it be better to disable preemption in netif_rx? Also note that
> with RFS we would be taking rcu_read_lock in netif_rx anyway and that
> could cover all the instances of smp_processor_id().
>

OK, that makes sense. What do you think of applying a small fix before the
RFS integration? It is better to have smaller patches anyway :)

[PATCH net-next-2.6] net: netif_rx() must disable preemption

Eric Paris reported that netif_rx() calls smp_processor_id() from
preemptible context, in particular when the caller is
ip_dev_loopback_xmit().

The RPS commit added this smp_processor_id() call; this patch makes sure
preemption is disabled around it. get_rps_cpu() wants rcu_read_lock()
anyway, so we can take it a bit earlier, in netif_rx() itself.
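Aside, for readers who have not hit this warning before: smp_processor_id()
is only meaningful while the current task cannot migrate to another CPU,
which is why calling it with preemption enabled trips the
CONFIG_DEBUG_PREEMPT check quoted in the subject. Here is a minimal sketch
of the get_cpu()/put_cpu() pattern the patch adopts for the non-RPS path;
the helper name netif_rx_no_rps is hypothetical, this is not the actual
dev.c code:

#include <linux/smp.h>
#include <linux/skbuff.h>

static int enqueue_to_backlog(struct sk_buff *skb, int cpu); /* as in dev.c */

/* Hypothetical illustration, not the patched function itself. */
static int netif_rx_no_rps(struct sk_buff *skb)
{
	/*
	 * get_cpu() is preempt_disable() followed by smp_processor_id(),
	 * so the returned id stays valid until the matching put_cpu().
	 */
	int cpu = get_cpu();
	int ret = enqueue_to_backlog(skb, cpu);

	put_cpu();	/* re-enable preemption */
	return ret;
}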
Reported-by: Eric Paris
Signed-off-by: Eric Dumazet
---
 net/core/dev.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index a10a216..a96ea6a 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -2206,6 +2206,7 @@ DEFINE_PER_CPU(struct netif_rx_stats, netdev_rx_stat) = { 0, };
 /*
  * get_rps_cpu is called from netif_receive_skb and returns the target
  * CPU from the RPS map of the receiving queue for a given skb.
+ * rcu_read_lock must be held on entry.
  */
 static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb)
 {
@@ -2217,8 +2218,6 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb)
 	u8 ip_proto;
 	u32 addr1, addr2, ports, ihl;
 
-	rcu_read_lock();
-
 	if (skb_rx_queue_recorded(skb)) {
 		u16 index = skb_get_rx_queue(skb);
 		if (unlikely(index >= dev->num_rx_queues)) {
@@ -2296,7 +2295,6 @@ got_hash:
 	}
 
 done:
-	rcu_read_unlock();
 	return cpu;
 }
 
@@ -2392,7 +2390,7 @@ enqueue:
 
 int netif_rx(struct sk_buff *skb)
 {
-	int cpu;
+	int ret;
 
 	/* if netpoll wants it, pretend we never saw it */
 	if (netpoll_rx(skb))
@@ -2402,14 +2400,21 @@ int netif_rx(struct sk_buff *skb)
 		net_timestamp(skb);
 
 #ifdef CONFIG_RPS
+	{
+		int cpu;
+
+		rcu_read_lock();
 	cpu = get_rps_cpu(skb->dev, skb);
 	if (cpu < 0)
 		cpu = smp_processor_id();
+		ret = enqueue_to_backlog(skb, cpu);
+		rcu_read_unlock();
+	}
 #else
-	cpu = smp_processor_id();
+	ret = enqueue_to_backlog(skb, get_cpu());
+	put_cpu();
 #endif
-
-	return enqueue_to_backlog(skb, cpu);
+	return ret;
 }
 EXPORT_SYMBOL(netif_rx);
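For reference, this is how netif_rx() reads once the hunks above are
applied, reconstructed from the diff with indentation tidied; lines
between the hunks are unchanged and elided:

int netif_rx(struct sk_buff *skb)
{
	int ret;

	/* if netpoll wants it, pretend we never saw it */
	if (netpoll_rx(skb))
		return NET_RX_DROP;

	/* ... unchanged lines elided ... */
		net_timestamp(skb);

#ifdef CONFIG_RPS
	{
		int cpu;

		rcu_read_lock();	/* required by get_rps_cpu(); see the
					 * comment the patch adds above */
		cpu = get_rps_cpu(skb->dev, skb);
		if (cpu < 0)
			cpu = smp_processor_id();
		ret = enqueue_to_backlog(skb, cpu);
		rcu_read_unlock();
	}
#else
	ret = enqueue_to_backlog(skb, get_cpu());	/* pins the CPU id */
	put_cpu();
#endif
	return ret;
}

Note that the smp_processor_id() fallback in the RPS branch now runs
under rcu_read_lock(), as Tom suggested, and the non-RPS branch pins the
CPU with get_cpu()/put_cpu() instead of a bare smp_processor_id().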