[RFC,1/2] mac80211: make max_network_latency notifier atomic safe

Message ID 1276074915-26879-1-git-send-email-florian@mickler.org
State Not Applicable, archived
Delegated to: David Miller

Commit Message

florian@mickler.org June 9, 2010, 9:15 a.m. UTC
In order to have the pm_qos framework be callable from interrupt
context, all listeners have to also be callable in that context.

This patch schedules the update of the latency value via
ieee80211_max_network_latency() for execution on
the global workqueue (using schedule_work()).

As there was no synchronization/locking between the listener and the
caller of pm_qos_update_request before, there should be no new races
uncovered by this, although the timing probably changes.

Signed-off-by: Florian Mickler <florian@mickler.org>
---

This needs some networking expertise to check the reasoning above and
judge the implementation.

 net/mac80211/ieee80211_i.h |    4 +++-
 net/mac80211/main.c        |    5 +++--
 net/mac80211/mlme.c        |   20 +++++++++++++++-----
 3 files changed, 21 insertions(+), 8 deletions(-)

Comments

Johannes Berg June 9, 2010, 9:38 a.m. UTC | #1
On Wed, 2010-06-09 at 11:15 +0200, florian@mickler.org wrote:
> In order to have the pm_qos framework be callable from interrupt
> context, all listeners have to also be callable in that context.

That makes no sense at all. Why add work structs _everywhere_ in the
callees and make the API harder to use and easy to get completely wrong,
instead of just adding a single work struct that will be queued from the
caller and dealing with the locking complexity etc. just once?

johannes

florian@mickler.org June 9, 2010, 10:20 a.m. UTC | #2
On Wed, 09 Jun 2010 11:38:07 +0200
Johannes Berg <johannes@sipsolutions.net> wrote:

> On Wed, 2010-06-09 at 11:15 +0200, florian@mickler.org wrote:
> > In order to have the pm_qos framework be callable from interrupt
> > context, all listeners have to also be callable in that context.
> 
> That makes no sense at all. Why add work structs _everywhere_ in the
> callees and make the API harder to use and easy to get completely wrong,
> instead of just adding a single work struct that will be queued from the
> caller and dealing with the locking complexity etc. just once?
> 
> johannes

Just to defend this approach, though I'm certainly not married to it
(hence the RFC):

There are only two listeners at the moment. I suspect that most future
uses of the framework need to be atomic, as the driver that
requests a specific quality of service probably doesn't want to get into
races with the provider of that service (the listener). So I suspected
the network listener to be the special case.

The race between service-provider and qos-requester for non-atomic
contexts is already there, isn't it? So the locking complexity shouldn't
be worse than before.

But my first approach to this is seen here:
https://lists.linux-foundation.org/pipermail/linux-pm/2010-June/026902.html

A third possibility would be to make it depend on the type of the
constraint whether blocking notifiers are allowed or not.
But that would sacrifice API consistency (update_request for one
constraint would be allowed to be called in interrupt context while
update_request for another would not).

Cheers,
Flo

Johannes Berg June 9, 2010, 10:42 a.m. UTC | #3
On Wed, 2010-06-09 at 12:20 +0200, Florian Mickler wrote:
> On Wed, 09 Jun 2010 11:38:07 +0200
> Johannes Berg <johannes@sipsolutions.net> wrote:
> 
> > On Wed, 2010-06-09 at 11:15 +0200, florian@mickler.org wrote:
> > > In order to have the pm_qos framework be callable from interrupt
> > > context, all listeners have to also be callable in that context.
> > 
> > That makes no sense at all. Why add work structs _everywhere_ in the
> > callees and make the API harder to use and easy to get completely wrong,
> > instead of just adding a single work struct that will be queued from the
> > caller and dealing with the locking complexity etc. just once?


> There are only two listeners at the moment. I suspect that most future
> uses of the framework need to be atomic, as the driver that
> requests a specific quality of service probably doesn't want to get into
> races with the provider of that service (the listener). So I suspected
> the network listener to be the special case.

Well even if it doesn't _want_ to race with it, a lot of drivers like
USB drivers etc. can't really do anything without deferring to a
workqueue.

And what's the race anyway? You get one update, defer the work, and if
another update happens in between, you just read the new value when the
work finally runs -- and you end up doing it only once instead of twice.
That doesn't seem like a problem.
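
Spelled out as a sketch (made-up names, not the mac80211 code -- the RFC
patch at the bottom of this page does essentially the same thing, except
that it stashes the latency value in the work struct itself):

#include <linux/workqueue.h>
#include <linux/notifier.h>

static atomic_long_t latest_latency;
static void latency_work_fn(struct work_struct *work);
static DECLARE_WORK(latency_work, latency_work_fn);
static void recalc_power_save(s32 latency);  /* stand-in for the real recalculation */

/* listener side: registered e.g. via pm_qos_add_notifier(); may run in any context */
static int latency_notifier(struct notifier_block *nb,
                            unsigned long data, void *dummy)
{
        /* overwrite the previous value instead of queueing updates up */
        atomic_long_set(&latest_latency, data);
        /* no-op if the work is already pending */
        schedule_work(&latency_work);
        return 0;
}

static void latency_work_fn(struct work_struct *work)
{
        s32 latency = (s32) atomic_long_read(&latest_latency);

        /* two updates arriving before this runs collapse into one pass here */
        recalc_power_save(latency);
}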

> The race between service-provider and qos-requester for non-atomic
> contexts is already there, isn't it? So the locking complexity shouldn't
> be worse than before.

I have no idea how it works now; I thought you can't request an update
from an atomic context.

However, if you request a QoS value, it is fundamentally that -- a
request. There's no guarantee as to when or how it will be honoured.

> But my first approach to this is seen here:
> https://lists.linux-foundation.org/pipermail/linux-pm/2010-June/026902.html

Icky too.

> A third possibility would be to make it depend on the type of the
> constraint whether blocking notifiers are allowed or not.
> But that would sacrifice API consistency (update_request for one
> constraint would be allowed to be called in interrupt context while
> update_request for another would not).

I don't see what's wrong with the fourth possibility: Allow calling
pm_qos_update_request() from atomic context, but change _it_ to schedule
off a work that calls the blocking notifier chain. That avoids the
complexity in notify-API users since they have process context, and also
in request-API users since they can call it from any context.
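
Roughly like this (an illustrative sketch only, simplified; the structures
and helper names here are partly made up, not the actual pm_qos code):

#include <linux/workqueue.h>
#include <linux/notifier.h>
#include <linux/pm_qos_params.h>

/* one deferred notification per pm_qos class; fields shown are illustrative */
struct pm_qos_object {
        struct blocking_notifier_head *notifiers;
        struct work_struct notify_work;         /* INIT_WORK() done at init time */
        /* ... request list, current target value, lock, ... */
};

static void pm_qos_notify_work_fn(struct work_struct *work)
{
        struct pm_qos_object *o =
                container_of(work, struct pm_qos_object, notify_work);

        /* process context here, so a blocking notifier chain is fine */
        blocking_notifier_call_chain(o->notifiers,
                                     (unsigned long) pm_qos_current_value(o),
                                     NULL);
}

void pm_qos_update_request(struct pm_qos_request_list *req, s32 new_value)
{
        struct pm_qos_object *o = pm_qos_class_object(req);    /* made-up helper */

        /* recompute the aggregate target under a spinlock: atomic-safe */
        update_target(o, req, new_value);
        /* notify the listeners later, from the work item */
        schedule_work(&o->notify_work);
}

update_target() stands for whatever recomputes the aggregate constraint;
the only point is that the blocking chain runs from the work item, not
from the caller.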

johannes

florian@mickler.org June 9, 2010, 12:16 p.m. UTC | #4
On Wed, 09 Jun 2010 12:42:08 +0200
Johannes Berg <johannes@sipsolutions.net> wrote:

> On Wed, 2010-06-09 at 12:20 +0200, Florian Mickler wrote:
> 
> > A third possibility would be to make it depend on the type of the
> > constraint whether blocking notifiers are allowed or not.
> > But that would sacrifice API consistency (update_request for one
> > constraint would be allowed to be called in interrupt context while
> > update_request for another would not).
> 
> I don't see what's wrong with the fourth possibility: Allow calling
> pm_qos_update_request() from atomic context, but change _it_ to schedule
> off a work that calls the blocking notifier chain. That avoids the
> complexity in notify-API users since they have process context, and also
> in request-API users since they can call it from any context.
> 
> johannes

That was also my first idea, but then I thought about qos and thought
atomic notifications were necessary.
Do you see any value in having atomic notifications?

I have the following situation before my eyes:

	Driver A gets an interrupt and needs (to service that
	interrupt) the cpu to guarantee a latency of X because the
	device is a bit icky.

Now, in that situation, if we don't immediately (without scheduling in
between) notify the system to be in that latency mode, the driver won't
function properly. Is this a realistic scenario?
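
To make the scenario concrete (a hypothetical driver; all foo_* names and
the latency value are made up):

#include <linux/interrupt.h>
#include <linux/pm_qos_params.h>

struct foo_dev {
        struct pm_qos_request_list latency_req;
        /* ... */
};

#define FOO_SERVICE_LATENCY_US  50      /* made-up number */

static irqreturn_t foo_irq_handler(int irq, void *dev_id)
{
        struct foo_dev *foo = dev_id;

        /*
         * The device needs the CPU to stay within FOO_SERVICE_LATENCY_US
         * while the interrupt is serviced; with an atomic-safe
         * update_request() the constraint can be raised right here.
         */
        pm_qos_update_request(&foo->latency_req, FOO_SERVICE_LATENCY_US);

        /* ... service the device ... */

        return IRQ_HANDLED;
}

The open question is whether the listeners (cpuidle and friends) must see
the new value before the handler continues; a notification deferred to a
work item would not guarantee that.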

At the moment we only have process context notification and only 2
listeners.

I think providing for atomic as well as "relaxed" notification could be
useful. 

If atomic notification is deemed unnecessary, I have no problem just
using schedule_work() in update_request().
Anyway, it is probably best to split this: i.e. first make
update_request() callable from atomic contexts by doing the
schedule_work() in update_request(), and then as an add-on provide
constraint objects with atomic notifications.

Flo

Johannes Berg June 9, 2010, 12:27 p.m. UTC | #5
On Wed, 2010-06-09 at 14:16 +0200, Florian Mickler wrote:

> That was also my first idea, but then I thought about qos and thought
> atomic notifications were necessary.
> Do you see any value in having atomic notifications?
> 
> I have the following situation before my eyes:
> 
> 	Driver A gets an interrupt and needs (to service that
> 	interrupt) the cpu to guarantee a latency of X because the
> 	device is a bit icky.
> 
> Now, in that situation, if we don't immediately (without scheduling in
> between) notify the system to be in that latency mode, the driver won't
> function properly. Is this a realistic scenario?
> 
> At the moment we only have process context notification and only 2
> listeners.
> 
> I think providing for atomic as well as "relaxed" notification could be
> useful. 
> 
> If atomic notification is deemed unnecessary, I have no problem just
> using schedule_work() in update_request().
> Anyway, it is probably best to split this: i.e. first make
> update_request() callable from atomic contexts by doing the
> schedule_work() in update_request(), and then as an add-on provide
> constraint objects with atomic notifications.

Well I remember http://thread.gmane.org/gmane.linux.kernel/979935 where
Mark renamed things to "request" which seems to imply to me more of a
"please do this" than "I NEED IT NOW!!!!!".

johannes

florian@mickler.org June 9, 2010, 3:37 p.m. UTC | #6
On Wed, 09 Jun 2010 14:27:05 +0200
Johannes Berg <johannes@sipsolutions.net> wrote:

> On Wed, 2010-06-09 at 14:16 +0200, Florian Mickler wrote:
> 
> > That was also my first idea, but then I thought about qos and thought
> > atomic notifications were necessary.
> > Do you see any value in having atomic notifications?
> > 
> > I have the following situation before my eyes:
> > 
> > 	Driver A gets an interrupt and needs (to service that
> > 	interrupt) the cpu to guarantee a latency of X because the
> > 	device is a bit icky.
> > 
> > Now, in that situation, if we don't immediately (without scheduling in
> > between) notify the system to be in that latency mode, the driver won't
> > function properly. Is this a realistic scenario?
> > 
> > At the moment we only have process context notification and only 2
> > listeners.
> > 
> > I think providing for atomic as well as "relaxed" notification could be
> > useful. 
> > 
> > If atomic notification is deemed unnecessary, I have no problem just
> > using schedule_work() in update_request().
> > Anyway, it is probably best to split this: i.e. first make
> > update_request() callable from atomic contexts by doing the
> > schedule_work() in update_request(), and then as an add-on provide
> > constraint objects with atomic notifications.
> 
> Well I remember http://thread.gmane.org/gmane.linux.kernel/979935 where
> Mark renamed things to "request" which seems to imply to me more of a
> "please do this" than "I NEED IT NOW!!!!!".
> 
> johannes

Yes. I just posted a version which uses schedule_work().
Just FYI, James has also posted his version which uses either a blocking
or an atomic notifier chain.
http://article.gmane.org/gmane.linux.kernel/996813

Cheers,
Flo

Patch

diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
index 1a9e2da..3d2155a 100644
--- a/net/mac80211/ieee80211_i.h
+++ b/net/mac80211/ieee80211_i.h
@@ -851,6 +851,7 @@  struct ieee80211_local {
 	struct work_struct dynamic_ps_disable_work;
 	struct timer_list dynamic_ps_timer;
 	struct notifier_block network_latency_notifier;
+	struct work_struct network_latency_notifier_work;
 
 	int user_power_level; /* in dBm */
 	int power_constr_level; /* in dBm */
@@ -994,8 +995,9 @@  ieee80211_rx_result ieee80211_sta_rx_mgmt(struct ieee80211_sub_if_data *sdata,
 void ieee80211_send_pspoll(struct ieee80211_local *local,
 			   struct ieee80211_sub_if_data *sdata);
 void ieee80211_recalc_ps(struct ieee80211_local *local, s32 latency);
-int ieee80211_max_network_latency(struct notifier_block *nb,
+int ieee80211_max_network_latency_notification(struct notifier_block *nb,
 				  unsigned long data, void *dummy);
+void ieee80211_max_network_latency(struct work_struct *w);
 void ieee80211_sta_process_chanswitch(struct ieee80211_sub_if_data *sdata,
 				      struct ieee80211_channel_sw_ie *sw_elem,
 				      struct ieee80211_bss *bss,
diff --git a/net/mac80211/main.c b/net/mac80211/main.c
index 22a384d..5cded3a 100644
--- a/net/mac80211/main.c
+++ b/net/mac80211/main.c
@@ -607,9 +607,10 @@  int ieee80211_register_hw(struct ieee80211_hw *hw)
 	rtnl_unlock();
 
 	ieee80211_led_init(local);
-
+	INIT_WORK(&local->network_latency_notifier_work,
+			ieee80211_max_network_latency);
 	local->network_latency_notifier.notifier_call =
-		ieee80211_max_network_latency;
+		ieee80211_max_network_latency_notification;
 	result = pm_qos_add_notifier(PM_QOS_NETWORK_LATENCY,
 				     &local->network_latency_notifier);
 
diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
index 0839c4e..feb6134 100644
--- a/net/mac80211/mlme.c
+++ b/net/mac80211/mlme.c
@@ -1953,17 +1953,27 @@  void ieee80211_mlme_notify_scan_completed(struct ieee80211_local *local)
 	rcu_read_unlock();
 }
 
-int ieee80211_max_network_latency(struct notifier_block *nb,
-				  unsigned long data, void *dummy)
+void ieee80211_max_network_latency(struct work_struct *w)
 {
-	s32 latency_usec = (s32) data;
+	s32 latency_usec = (s32) atomic_long_read(&w->data);
 	struct ieee80211_local *local =
-		container_of(nb, struct ieee80211_local,
-			     network_latency_notifier);
+		container_of(w, struct ieee80211_local,
+			     network_latency_notifier_work);
 
 	mutex_lock(&local->iflist_mtx);
 	ieee80211_recalc_ps(local, latency_usec);
 	mutex_unlock(&local->iflist_mtx);
+}
+
+/* the notifier might be called from interrupt context */
+int ieee80211_max_network_latency_notification(struct notifier_block *nb,
+				  unsigned long data, void *dummy)
+{
+	struct ieee80211_local *local =
+		container_of(nb, struct ieee80211_local,
+				network_latency_notifier);
+	atomic_long_set(&local->network_latency_notifier_work.data, data);
+	schedule_work(&local->network_latency_notifier_work);
 
 	return 0;
 }