| Message ID | 20140102.174136.1587468571877727139.davem@davemloft.net |
|---|---|
| State | RFC, archived |
| Delegated to | David Miller |
On Thu, Jan 2, 2014 at 2:41 PM, David Miller <davem@davemloft.net> wrote:
>
> Yes, something like the following untested patch.

I did that initially too, but that doesn't work, because the code inside
the skb_queue_walk() loop ends up calling irlap_send_i_frame(), which
calls irlap_queue_xmit(), which does dev_queue_xmit(). Which will do a
bh_disable/bh_enable (as part of spin_[un]lock_bh(), iirc).

So you can't use "spin_lock_irqsave()", because it doesn't nest around a
softirq-disable. You could use spin_lock_bh(), but if it's true that
it's all in a softirq context, I think it should be safe to just do
"spin_lock()". Afaik, there is nothing that actually does anything in
real hardirq context in there. I *think* all the network _rcv()
functions are called from softirqs, right?

                  Linus
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
From: Linus Torvalds <torvalds@linux-foundation.org>
Date: Thu, 2 Jan 2014 14:47:44 -0800

> You could use spin_lock_bh(), but if it's true that it's all in a
> softirq context, I think it should be safe to just do "spin_lock()".
> Afaik, there is nothing that actually does anything in real hardirq
> context in there. I *think* all the network _rcv() functions are
> called from softirqs, right?

I'm worried that mixing hardIRQ locking (via the direct calls to
skb_dequeue() et al.) with explicit softIRQ locking will make lockdep
complain.
On Thu, Jan 2, 2014 at 4:05 PM, David Miller <davem@davemloft.net> wrote:
>
> I'm worried that mixing hardIRQ locking (via the direct calls to
> skb_dequeue() et al.) with explicit softIRQ locking will make lockdep
> complain.

You're probably right. I don't see how to do it right, though. lockdep
(or maybe it's DEBUG_ATOMIC_SLEEP or whatever) definitely complains
about the "enable softirqs with hardirqs disabled" part, and the way
the queueing is set up in irda, I don't see how else to do it.

Unless it would be ok to only re-send the oldest packet, not all of
them, and thus get rid of the loop entirely. But with the obvious lack
of testing irda is getting, that kind of change sounds like a really
bad idea.

The warning is, in case you care:

    WARNING: CPU: 2 PID: 0 at kernel/softirq.c:156 local_bh_enable+0x62/0x90()
    Call Trace:
      dev_queue_xmit
      irlap_queue_xmit
      irlap_send_i_frame
      irlap_resend_rejected_frames
      irlap_state_nrm_s
      irlap_do_event
      irlap_driver_rcv
      __netif_receive_skb_core

and without the _irqsave() at least that part is fine. But no, I didn't
try it with lockdep, and I think you're right that it would break.

Oh well. Since I can't seem to get my test-case to work anyway, I guess
it doesn't much matter..

                  Linus
```diff
diff --git a/net/irda/irlap.c b/net/irda/irlap.c
index 005b424..1334625 100644
--- a/net/irda/irlap.c
+++ b/net/irda/irlap.c
@@ -698,19 +698,22 @@ int irlap_generate_rand_time_slot(int S, int s)
 void irlap_update_nr_received(struct irlap_cb *self, int nr)
 {
 	struct sk_buff *skb = NULL;
+	unsigned long flags;
 	int count = 0;

 	/*
 	 * Remove all the ack-ed frames from the window queue.
 	 */
+	spin_lock_irqsave(&self->wx_list.lock, flags);
+
 	/*
 	 * Optimize for the common case. It is most likely that the receiver
 	 * will acknowledge all the frames we have sent! So in that case we
 	 * delete all frames stored in window.
 	 */
 	if (nr == self->vs) {
-		while ((skb = skb_dequeue(&self->wx_list)) != NULL) {
+		while ((skb = __skb_dequeue(&self->wx_list)) != NULL) {
 			dev_kfree_skb(skb);
 		}
 		/* The last acked frame is the next to send minus one */
@@ -720,7 +723,7 @@ void irlap_update_nr_received(struct irlap_cb *self, int nr)
 		while ((skb_peek(&self->wx_list) != NULL) &&
 		       (((self->va+1) % 8) != nr))
 		{
-			skb = skb_dequeue(&self->wx_list);
+			skb = __skb_dequeue(&self->wx_list);
 			dev_kfree_skb(skb);

 			self->va = (self->va + 1) % 8;
@@ -730,6 +733,8 @@ void irlap_update_nr_received(struct irlap_cb *self, int nr)

 	/* Advance window */
 	self->window = self->window_size - skb_queue_len(&self->wx_list);
+
+	spin_unlock_irqrestore(&self->wx_list.lock, flags);
 }

diff --git a/net/irda/irlap_frame.c b/net/irda/irlap_frame.c
index 9ea0c93..45fd93f 100644
--- a/net/irda/irlap_frame.c
+++ b/net/irda/irlap_frame.c
@@ -983,10 +983,13 @@ void irlap_resend_rejected_frames(struct irlap_cb *self, int command)
 {
 	struct sk_buff *tx_skb;
 	struct sk_buff *skb;
+	unsigned long flags;

 	IRDA_ASSERT(self != NULL, return;);
 	IRDA_ASSERT(self->magic == LAP_MAGIC, return;);

+	spin_lock_irqsave(&self->wx_list.lock, flags);
+
 	/* Resend unacknowledged frame(s) */
 	skb_queue_walk(&self->wx_list, skb) {
 		irlap_wait_min_turn_around(self, &self->qos_tx);
@@ -1039,16 +1042,20 @@ void irlap_resend_rejected_frames(struct irlap_cb *self, int command)
 		}
 	}
 #endif
+	spin_unlock_irqrestore(&self->wx_list.lock, flags);
 }

 void irlap_resend_rejected_frame(struct irlap_cb *self, int command)
 {
 	struct sk_buff *tx_skb;
+	unsigned long flags;
 	struct sk_buff *skb;

 	IRDA_ASSERT(self != NULL, return;);
 	IRDA_ASSERT(self->magic == LAP_MAGIC, return;);

+	spin_lock_irqsave(&self->wx_list.lock, flags);
+
 	/* Resend unacknowledged frame(s) */
 	skb = skb_peek(&self->wx_list);
 	if (skb != NULL) {
@@ -1072,6 +1079,8 @@ void irlap_resend_rejected_frame(struct irlap_cb *self, int command)

 		irlap_send_i_frame(self, tx_skb, command);
 	}
+
+	spin_unlock_irqrestore(&self->wx_list.lock, flags);
 }

 /*
```