From: Narendra K
To: netdev@vger.kernel.org
Subject: Call trace related to bonding seen in 2.6.34
Date: Tue, 1 Jun 2010 23:15:22 +0530

Hello,

A call trace involving bond_mii_monitor, as described in this thread -
http://patchwork.ozlabs.org/patch/41288/ - was seen on the 2.6.34 kernel.
(The trace is similar to the one described in the post dated 2009-12-17
21:31:36.) The trace appears when the network service is stopped, and the
issue is triggered when the network service is started and stopped in
quick succession.

The bonding device configuration is as follows:

  Bonding driver version: 3.6.0
  Mode: balance-alb (the issue is also seen with active-backup mode)
  miimon=100
  Slaves: three slaves with link up and one slave with link down

Though this requires more thought and investigation, I thought it could be
a useful data point that the change to the bonding driver below seemed to
make the issue go away:

 drivers/net/bonding/bond_main.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

Any thoughts?

With regards,
Narendra K

---

diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 0075514..f280aaf 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -2408,7 +2408,7 @@ void bond_mii_monitor(struct work_struct *work)
 	}
 
 re_arm:
-	if (bond->params.miimon)
+	if (bond->params.miimon && !bond->kill_timers)
 		queue_delayed_work(bond->wq, &bond->mii_work,
 				   msecs_to_jiffies(bond->params.miimon));
 out:
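
For context on why the one-line guard helps, below is a minimal,
self-contained sketch of the self-re-arming delayed-work pattern that
bond_mii_monitor() uses and of the kill_timers check the diff adds. This is
an illustration, not code from the bonding driver: the demo_* names and the
standalone-module framing are made up for the example, only the miimon and
kill_timers fields mirror the real ones, and the locking the real driver
holds around kill_timers (bond->lock) is omitted for brevity.

/* Sketch only - illustrates the re-arm pattern, not the bonding driver. */
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>

struct demo_bond {
	struct workqueue_struct *wq;
	struct delayed_work mii_work;
	int miimon;		/* polling interval in ms, like bond->params.miimon */
	int kill_timers;	/* set by the teardown path to stop re-arming */
};

static struct demo_bond demo;

static void demo_mii_monitor(struct work_struct *work)
{
	struct demo_bond *bond = container_of(work, struct demo_bond,
					      mii_work.work);

	/* ... link-state inspection would happen here ... */

	/*
	 * Re-arm guard.  In the real driver the close path sets
	 * kill_timers and then cancels the work; without the check
	 * below, a monitor run that is already past its initial
	 * kill_timers test can queue itself again behind that cancel,
	 * which is the window the reported trace points at.
	 */
	if (bond->miimon && !bond->kill_timers)
		queue_delayed_work(bond->wq, &bond->mii_work,
				   msecs_to_jiffies(bond->miimon));
}

static int __init demo_init(void)
{
	demo.miimon = 100;
	demo.kill_timers = 0;
	demo.wq = create_singlethread_workqueue("demo_bond");
	if (!demo.wq)
		return -ENOMEM;
	INIT_DELAYED_WORK(&demo.mii_work, demo_mii_monitor);
	queue_delayed_work(demo.wq, &demo.mii_work,
			   msecs_to_jiffies(demo.miimon));
	return 0;
}

static void __exit demo_exit(void)
{
	/* Teardown: stop re-arming first, then drain and destroy the queue. */
	demo.kill_timers = 1;
	cancel_delayed_work_sync(&demo.mii_work);
	destroy_workqueue(demo.wq);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

The point of the guard is ordering: once the teardown path has set
kill_timers, a monitor run that is already executing declines to queue
itself again, so no work item can be left pending on a workqueue that is
about to be destroyed.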