[net-next,v2,05/10] net: qualcomm: rmnet: Set pacing rate

Message ID 1515015787-6713-6-git-send-email-subashab@codeaurora.org
State Changes Requested, archived
Delegated to: David Miller
Series net: qualcomm: rmnet: Enable csum offloads

Commit Message

Subash Abhinov Kasiviswanathan Jan. 3, 2018, 9:43 p.m. UTC
With the default pacing shift of 10, the uplink data rate for a single
TCP stream is around 10 Mbps. Setting the shift to 8 increases the rate
to 146 Mbps, which is the maximum supported transmit rate.

Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
---
 drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c | 3 +++
 1 file changed, 3 insertions(+)

Comments

Eric Dumazet Jan. 3, 2018, 10:01 p.m. UTC | #1
On Wed, 2018-01-03 at 14:43 -0700, Subash Abhinov Kasiviswanathan
wrote:
> With the default pacing shift of 10, the uplink data rate for a single
> TCP stream is around 10 Mbps. Setting the shift to 8 increases the rate
> to 146 Mbps, which is the maximum supported transmit rate.
> 
> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
> ---
>  drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
> index 8e1f43a..8f8c4f2 100644
> --- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
> +++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
> @@ -16,6 +16,7 @@
>  #include <linux/netdevice.h>
>  #include <linux/netdev_features.h>
>  #include <linux/if_arp.h>
> +#include <net/sock.h>
>  #include "rmnet_private.h"
>  #include "rmnet_config.h"
>  #include "rmnet_vnd.h"
> @@ -204,6 +205,8 @@ void rmnet_egress_handler(struct sk_buff *skb)
>  	struct rmnet_priv *priv;
>  	u8 mux_id;
>  
> +	sk_pacing_shift_update(skb->sk, 8);

Well... Please tell us why this is needed in this driver.

This interface is meant for wifi aggregation, not to work around some
strange ethernet driver designs.
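
For reference, sk_pacing_shift_update() was added in v4.15 so that wifi
drivers could relax the TCP Small Queues budget on sockets they transmit
for. A sketch of the v4.15-era helper from include/net/sock.h:

/* Lower (or restore) the per-socket pacing shift consumed by TSQ.
 * Only full sockets carry sk_pacing_shift, hence the sk_fullsock()
 * check; skb->sk may also be NULL for some frames.
 */
static inline void sk_pacing_shift_update(struct sock *sk, int val)
{
	if (!sk || !sk_fullsock(sk) || sk->sk_pacing_shift == val)
		return;
	sk->sk_pacing_shift = val;
}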
Subash Abhinov Kasiviswanathan Jan. 3, 2018, 10:45 p.m. UTC | #2
>> +	sk_pacing_shift_update(skb->sk, 8);
> 
> Well... Please tell us why this is needed in this driver.
> 
> This interface is meant for wifi aggregation, not to work around some
> strange ethernet driver designs.

Hi Eric

The real device over which the rmnet devices are installed also
aggregates multiple IP packets and sends them to the hardware as a
single large aggregate frame.
Eric Dumazet Jan. 4, 2018, 7:44 a.m. UTC | #3
On Wed, 2018-01-03 at 15:45 -0700, Subash Abhinov Kasiviswanathan
wrote:
> > > +	sk_pacing_shift_update(skb->sk, 8);
> > 
> > Well... Please tell us why this is needed in this driver.
> > 
> > This interface is meant for wifi aggregation, not to work around some
> > strange ethernet driver designs.
> 
> Hi Eric
> 
> The real device over which the rmnet devices are installed also
> aggregates multiple IP packets and sends them to the hardware as a
> single large aggregate frame.

It would be nice to give some details about this in the changelog.

Also, what results do you get with different values for the shift (10,
9, 8)?

My fear is that people might be tempted to blindly use
sk_pacing_shift_update() just because a single TCP flow gets 'better'
results.

Bufferbloat is a serious issue; we do not want to allow a single TCP
flow to fill a FIFO.

Otherwise, we could remove the TCP Small Queues overhead from the
kernel and be happy.

Thanks.
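
The shift matters because TCP Small Queues caps the bytes a socket may
keep queued toward the device at roughly its pacing rate shifted right
by sk_pacing_shift: about 1 ms of data at the default shift of 10, about
4 ms at 8. A simplified sketch of the check in net/ipv4/tcp_output.c
(the real function also clamps against tcp_limit_output_bytes and scales
the limit by a caller-supplied factor):

/* Simplified sketch: returns true when the socket should be throttled,
 * i.e. it already has ~(pacing_rate >> sk_pacing_shift) bytes queued
 * toward the device.
 */
static bool tcp_small_queue_check(struct sock *sk, const struct sk_buff *skb)
{
	unsigned int limit;

	limit = max(2 * skb->truesize,
		    sk->sk_pacing_rate >> sk->sk_pacing_shift);
	return refcount_read(&sk->sk_wmem_alloc) > limit;
}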
Subash Abhinov Kasiviswanathan Jan. 4, 2018, 10:43 p.m. UTC | #4
>> The real device over which the rmnet devices are installed also
>> aggregates multiple IP packets and sends them to the hardware as a
>> single large aggregate frame.
> 
> It would be nice to give some details about this in the changelog.
> 
> Also, what results do you get with different values for the shift (10,
> 9, 8)?
> 
> My fear is that people might be tempted to blindly use
> sk_pacing_shift_update() just because a single TCP flow gets 'better'
> results.
> 
> Bufferbloat is a serious issue; we do not want to allow a single TCP
> flow to fill a FIFO.
> 
> Otherwise, we could remove the TCP Small Queues overhead from the
> kernel and be happy.
> 
> Thanks.

The test was run with a single iperf TCP TX stream for a duration of 30s.

 Pacing shift | Observed data rate (Mbps)
           10 |   9
            9 | 140
            8 | 146

I will update all of this in the commit text in v3.
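
(As a back-of-the-envelope check of those numbers: assuming the pacing
rate roughly tracks the achieved rate, the per-socket TSQ budget at each
shift works out as in this standalone sketch, not kernel code:)

#include <stdio.h>

int main(void)
{
	/* ~146 Mbps expressed in bytes per second */
	unsigned long pacing_rate = 146000000UL / 8;
	int shift;

	/* TSQ allows ~(pacing_rate >> shift) bytes queued per socket:
	 * shift 10 -> ~17.8 KB (~1 ms of data), 9 -> ~35.6 KB (~2 ms),
	 * 8 -> ~71.3 KB (~4 ms). With the modem holding packets back to
	 * build large aggregate frames, ~1 ms of data is too little to
	 * keep the link busy.
	 */
	for (shift = 10; shift >= 8; shift--)
		printf("shift %2d -> %lu bytes (~%lu us of data)\n",
		       shift, pacing_rate >> shift, 1000000UL >> shift);
	return 0;
}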

Patch

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
index 8e1f43a..8f8c4f2 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
@@ -16,6 +16,7 @@ 
 #include <linux/netdevice.h>
 #include <linux/netdev_features.h>
 #include <linux/if_arp.h>
+#include <net/sock.h>
 #include "rmnet_private.h"
 #include "rmnet_config.h"
 #include "rmnet_vnd.h"
@@ -204,6 +205,8 @@ void rmnet_egress_handler(struct sk_buff *skb)
 	struct rmnet_priv *priv;
 	u8 mux_id;
 
+	sk_pacing_shift_update(skb->sk, 8);
+
 	orig_dev = skb->dev;
 	priv = netdev_priv(orig_dev);
 	skb->dev = priv->real_dev;