
[4/9] net: openvswitch: use this_cpu_ptr per-cpu helper

Message ID 5093EE59.8010609@gmail.com
State Not Applicable, archived
Delegated to: David Miller

Commit Message

solomon Nov. 2, 2012, 4:01 p.m. UTC
From: Shan Wei <davidshan@tencent.com>

No change vs v1.

Many drivers use this kind of pattern to read and update a per-cpu variable:

	stats = this_cpu_ptr(dp->stats_percpu);
	u64_stats_update_begin(&stats->sync);
	stats->tx_packets++;
	u64_stats_update_end(&stats->sync);
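For reference, a minimal before/after sketch of the lookup itself (illustrative only; the surrounding stats update is unchanged):

	/* before: index the per-cpu area with the running CPU's id */
	stats = per_cpu_ptr(dp->stats_percpu, smp_processor_id());

	/* after: this_cpu_ptr() resolves the current CPU's slot directly,
	 * avoiding the explicit smp_processor_id() call and allowing a
	 * cheaper addressing mode (e.g. a segment-prefixed access on x86)
	 */
	stats = this_cpu_ptr(dp->stats_percpu);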


Signed-off-by: Shan Wei <davidshan@tencent.com>
---
 net/openvswitch/datapath.c |    4 ++--
 net/openvswitch/vport.c    |    5 ++---
 2 files changed, 4 insertions(+), 5 deletions(-)

Comments

Christoph Lameter (Ampere) Nov. 2, 2012, 5:46 p.m. UTC | #1
On Sat, 3 Nov 2012, Shan Wei wrote:

> +++ b/net/openvswitch/datapath.c
> @@ -208,7 +208,7 @@ void ovs_dp_process_received_packet(struct vport *p, struct sk_buff *skb)
>  	int error;
>  	int key_len;
>
> -	stats = per_cpu_ptr(dp->stats_percpu, smp_processor_id());
> +	stats = this_cpu_ptr(dp->stats_percpu);
>
>  	/* Extract flow from 'skb' into 'key'. */
>  	error = ovs_flow_extract(skb, p->port_no, &key, &key_len);
> @@ -282,7 +282,7 @@ int ovs_dp_upcall(struct datapath *dp, struct sk_buff *skb,
>  	return 0;
>
>  err:
> -	stats = per_cpu_ptr(dp->stats_percpu, smp_processor_id());
> +	stats = this_cpu_ptr(dp->stats_percpu);
>
>  	u64_stats_update_begin(&stats->sync);
>  	stats->n_lost++;
> diff --git a/net/openvswitch/vport.c b/net/openvswitch/vport.c
> index 03779e8..70af0be 100644
> --- a/net/openvswitch/vport.c
> +++ b/net/openvswitch/vport.c
> @@ -333,8 +333,7 @@ void ovs_vport_receive(struct vport *vport, struct sk_buff *skb)
>  {
>  	struct vport_percpu_stats *stats;
>
> -	stats = per_cpu_ptr(vport->percpu_stats, smp_processor_id());
> -
> +	stats = this_cpu_ptr(vport->percpu_stats);
>  	u64_stats_update_begin(&stats->sync);
>  	stats->rx_packets++;
>  	stats->rx_bytes += skb->len;
> @@ -359,7 +358,7 @@ int ovs_vport_send(struct vport *vport, struct sk_buff *skb)
>  	if (likely(sent)) {
>  		struct vport_percpu_stats *stats;
>
> -		stats = per_cpu_ptr(vport->percpu_stats, smp_processor_id());
> +		stats = this_cpu_ptr(vport->percpu_stats);
>
>  		u64_stats_update_begin(&stats->sync);
>  		stats->tx_packets++;

Use this_cpu_inc(vport->percpu_stats->packets) here?
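(For illustration only: the suggested form would presumably look like the line below, using the tx_packets field from this patch rather than the "packets" name above. Note it would bypass the u64_stats_update_begin/end pair, which is what the follow-ups discuss.)

	/* hypothetical single-statement per-cpu increment */
	this_cpu_inc(vport->percpu_stats->tx_packets);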


solomon Nov. 8, 2012, 2:22 p.m. UTC | #2
Christoph Lameter said, at 2012/11/3 1:46:
>>  		u64_stats_update_begin(&stats->sync);
>>  		stats->tx_packets++;
> 
> Use this_cpu_inc(vport->percpu_stats->packets) here?
 
Lots of network drivers use the u64_stats_sync infrastructure for their statistics on 32-bit and 64-bit hosts, no matter how many members the per-cpu variable has.

To keep them consistent, I have no plan to change them here.
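(A rough sketch of why the u64_stats_sync pairing is kept: on 32-bit hosts a 64-bit counter cannot be read atomically, so readers retry on the sequence counter. Helper names are from include/linux/u64_stats_sync.h; the reader below is illustrative, not code from this patch.)

	unsigned int start;
	u64 tx_packets;

	do {
		start = u64_stats_fetch_begin(&stats->sync);
		tx_packets = stats->tx_packets;
	} while (u64_stats_fetch_retry(&stats->sync, start));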

Thanks 

Christoph Lameter (Ampere) Nov. 8, 2012, 5:18 p.m. UTC | #3
On Thu, 8 Nov 2012, Shan Wei wrote:

> Christoph Lameter said, at 2012/11/3 1:46:
> >>  		u64_stats_update_begin(&stats->sync);
> >>  		stats->tx_packets++;
> >
> > Use this_cpu_inc(vport->percpu_stats->packets) here?
>
> Lots of network drivers use u64_stats_sync infrastructure for statistics

So they would all have an advantage from the patch.

solomon Nov. 9, 2012, 1:39 a.m. UTC | #4
Christoph Lameter said, at 2012/11/9 1:18:
> On Thu, 8 Nov 2012, Shan Wei wrote:
> 
>> Christoph Lameter said, at 2012/11/3 1:46:
>>>>  		u64_stats_update_begin(&stats->sync);
>>>>  		stats->tx_packets++;
>>>
>>> Use this_cpu_inc(vport->percpu_stats->packets) here?
>>
>> Lots of network drivers use u64_stats_sync infrastructure for statistics
> 
> So they would all have an advantage from the patch.

I will try that optimization in a separate follow-up patchset; it is not included in this series.

I will submit the v3 version of this series after testing today.

Thanks

 
 


Patch

diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
index 4c4b62c..77d16a5 100644
--- a/net/openvswitch/datapath.c
+++ b/net/openvswitch/datapath.c
@@ -208,7 +208,7 @@  void ovs_dp_process_received_packet(struct vport *p, struct sk_buff *skb)
 	int error;
 	int key_len;
 
-	stats = per_cpu_ptr(dp->stats_percpu, smp_processor_id());
+	stats = this_cpu_ptr(dp->stats_percpu);
 
 	/* Extract flow from 'skb' into 'key'. */
 	error = ovs_flow_extract(skb, p->port_no, &key, &key_len);
@@ -282,7 +282,7 @@  int ovs_dp_upcall(struct datapath *dp, struct sk_buff *skb,
 	return 0;
 
 err:
-	stats = per_cpu_ptr(dp->stats_percpu, smp_processor_id());
+	stats = this_cpu_ptr(dp->stats_percpu);
 
 	u64_stats_update_begin(&stats->sync);
 	stats->n_lost++;
diff --git a/net/openvswitch/vport.c b/net/openvswitch/vport.c
index 03779e8..70af0be 100644
--- a/net/openvswitch/vport.c
+++ b/net/openvswitch/vport.c
@@ -333,8 +333,7 @@  void ovs_vport_receive(struct vport *vport, struct sk_buff *skb)
 {
 	struct vport_percpu_stats *stats;
 
-	stats = per_cpu_ptr(vport->percpu_stats, smp_processor_id());
-
+	stats = this_cpu_ptr(vport->percpu_stats);
 	u64_stats_update_begin(&stats->sync);
 	stats->rx_packets++;
 	stats->rx_bytes += skb->len;
@@ -359,7 +358,7 @@  int ovs_vport_send(struct vport *vport, struct sk_buff *skb)
 	if (likely(sent)) {
 		struct vport_percpu_stats *stats;
 
-		stats = per_cpu_ptr(vport->percpu_stats, smp_processor_id());
+		stats = this_cpu_ptr(vport->percpu_stats);
 
 		u64_stats_update_begin(&stats->sync);
 		stats->tx_packets++;