
[6/7] tcp: cache result of earlier divides when mss-aligning things

Message ID Pine.LNX.4.64.0903151041350.23360@wrl-59.cs.helsinki.fi
State Accepted, archived
Delegated to: David Miller

Commit Message

Ilpo Järvinen March 15, 2009, 8:45 a.m. UTC
On Sun, 15 Mar 2009, Evgeniy Polyakov wrote:

> On Sun, Mar 15, 2009 at 02:07:54AM +0200, Ilpo Järvinen (ilpo.jarvinen@helsinki.fi) wrote:
> > @@ -676,7 +676,17 @@ static unsigned int tcp_xmit_size_goal(struct sock *sk, u32 mss_now,
> >  				  tp->tcp_header_len);
> >  
> >  		xmit_size_goal = tcp_bound_to_half_wnd(tp, xmit_size_goal);
> > -		xmit_size_goal -= (xmit_size_goal % mss_now);
> > +
> > +		/* We try hard to avoid divides here */
> > +		old_size_goal = tp->xmit_size_goal_segs * mss_now;
> > +
> > +		if (old_size_goal <= xmit_size_goal &&
> > +		    old_size_goal + mss_now > xmit_size_goal) {
> > +			xmit_size_goal = old_size_goal;
> 
> If this condition is far more likely than a changed xmit size, what about
> wrapping it in likely()?

So gcc won't read my comment? :-)

Updated below.
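
(For context: likely() is just a hint macro built on GCC's __builtin_expect(); a simplified form, close to the kernel's include/linux/compiler.h definition, is shown below. It only biases code layout toward the expected branch and carries no semantic weight.)

/* Branch-prediction hints: tell the compiler which way the test
 * usually goes so it can lay out the expected path straight-line. */
#define likely(x)	__builtin_expect(!!(x), 1)
#define unlikely(x)	__builtin_expect(!!(x), 0)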

Comments

Evgeniy Polyakov March 15, 2009, 9:08 a.m. UTC | #1
On Sun, Mar 15, 2009 at 10:45:16AM +0200, Ilpo Järvinen (ilpo.jarvinen@helsinki.fi) wrote:
> > > @@ -676,7 +676,17 @@ static unsigned int tcp_xmit_size_goal(struct sock *sk, u32 mss_now,
> > >  				  tp->tcp_header_len);
> > >  
> > >  		xmit_size_goal = tcp_bound_to_half_wnd(tp, xmit_size_goal);
> > > -		xmit_size_goal -= (xmit_size_goal % mss_now);
> > > +
> > > +		/* We try hard to avoid divides here */
> > > +		old_size_goal = tp->xmit_size_goal_segs * mss_now;
> > > +
> > > +		if (old_size_goal <= xmit_size_goal &&
> > > +		    old_size_goal + mss_now > xmit_size_goal) {
> > > +			xmit_size_goal = old_size_goal;
> > 
> > If this condition is far more likely than a changed xmit size, what about
> > wrapping it in likely()?
> 
> So gcc won't read my comment? :-)

I heard the next gcc version will be linked with libastral.so, but
we have to maintain backward compatibility.

> Updated below.

The whole series looks good.
David Miller March 16, 2009, 3:11 a.m. UTC | #2
From: "Ilpo Järvinen" <ilpo.jarvinen@helsinki.fi>
Date: Sun, 15 Mar 2009 10:45:16 +0200 (EET)

> [PATCHv2] tcp: cache result of earlier divides when mss-aligning things
> 
> The result is unlikely to change very often, so we hardly
> need to divide again after doing that once for a connection.
> Yet, if a divide does become necessary, we detect that, do
> the right thing, and settle back into the non-divide state.
> This takes the u16 space which was previously occupied by the
> plain xmit_size_goal.
> 
> This should take care of part of the TSO vs non-TSO difference
> we found earlier.
> 
> Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>

Applied.
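
The trick the commit message describes, reduced to a standalone sketch (illustrative names, not the kernel's): keep the goal cached as a segment count, and only fall back to a divide when the freshly computed byte goal drifts outside the window covered by that count.

/* Illustrative sketch of the divide-avoidance caching pattern;
 * mss_align_cached() is a hypothetical helper, not part of the patch. */
static unsigned int mss_align_cached(unsigned int *cached_segs,
				     unsigned int goal, unsigned int mss)
{
	unsigned int old = *cached_segs * mss;

	/* Common case: goal still lands in [old, old + mss), so the
	 * mss-aligned result is unchanged and no divide is needed. */
	if (old <= goal && old + mss > goal)
		return old;

	/* Rare case: recompute with a single divide and re-cache. */
	*cached_segs = goal / mss;
	return *cached_segs * mss;
}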

Patch

diff --git a/include/linux/tcp.h b/include/linux/tcp.h
index ad2021c..9d5078b 100644
--- a/include/linux/tcp.h
+++ b/include/linux/tcp.h
@@ -248,6 +248,7 @@  struct tcp_sock {
 	/* inet_connection_sock has to be the first member of tcp_sock */
 	struct inet_connection_sock	inet_conn;
 	u16	tcp_header_len;	/* Bytes of tcp header to send		*/
+	u16	xmit_size_goal_segs; /* Goal for segmenting output packets */
 
 /*
  *	Header prediction flags
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 886596f..0db9f3b 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -665,7 +665,7 @@  static unsigned int tcp_xmit_size_goal(struct sock *sk, u32 mss_now,
 				       int large_allowed)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
-	u32 xmit_size_goal;
+	u32 xmit_size_goal, old_size_goal;
 
 	xmit_size_goal = mss_now;
 
@@ -676,7 +676,17 @@  static unsigned int tcp_xmit_size_goal(struct sock *sk, u32 mss_now,
 				  tp->tcp_header_len);
 
 		xmit_size_goal = tcp_bound_to_half_wnd(tp, xmit_size_goal);
-		xmit_size_goal -= (xmit_size_goal % mss_now);
+
+		/* We try hard to avoid divides here */
+		old_size_goal = tp->xmit_size_goal_segs * mss_now;
+
+		if (likely(old_size_goal <= xmit_size_goal &&
+			   old_size_goal + mss_now > xmit_size_goal)) {
+			xmit_size_goal = old_size_goal;
+		} else {
+			tp->xmit_size_goal_segs = xmit_size_goal / mss_now;
+			xmit_size_goal = tp->xmit_size_goal_segs * mss_now;
+		}
 	}
 
 	return xmit_size_goal;
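
As a sanity check on the guard: whenever old_size_goal <= xmit_size_goal < old_size_goal + mss_now, the cached value is exactly what the removed modulo would have produced, so behaviour is unchanged on the fast path. A throwaway userspace check of that equivalence (illustrative only, with an arbitrary mss of 1448):

/* Userspace sketch: within the window [segs*mss, segs*mss + mss),
 * subtracting (x % mss) always gives back the cached segs*mss. */
#include <assert.h>

int main(void)
{
	unsigned int mss = 1448, segs, x;

	for (segs = 1; segs <= 64; segs++) {
		unsigned int old = segs * mss;

		for (x = old; x < old + mss; x++)
			assert(x - x % mss == old);
	}
	return 0;
}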