
busy_poll* : revise poll_cmp

Message ID c999a175-d02b-beda-7418-95a5b0f2d432@cn.fujitsu.com
State Rejected
Delegated to: Cyril Hrubis
Series busy_poll* : revise poll_cmp

Commit Message

Sun Lianwen March 19, 2018, 8:18 a.m. UTC
Setting low latency busy poll to 50 per socket spends more
time than setting low latency busy poll to 0 per socket.
That means the value of res_50 is bigger than res_0, so the
value of poll_cmp is always less than zero and the test fails.

Signed-off-by: Lianwen Sun <sunlw.fnst@cn.fujitsu.com>
---
  testcases/network/busy_poll/busy_poll01.sh | 2 +-
  testcases/network/busy_poll/busy_poll02.sh | 2 +-
  2 files changed, 2 insertions(+), 2 deletions(-)
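
For illustration, a minimal shell sketch of the arithmetic in question,
using the two netstress timings reported later in this thread as sample
values (res_50=53265 ms, res_0=23393 ms):

# echo 53265 > res_50
# echo 23393 > res_0
# echo $(( 100 - ($(cat res_50) * 100) / $(cat res_0) ))
-127
# echo $(( 100 - ($(cat res_0) * 100) / $(cat res_50) ))
57

The first expression is the current check in busy_poll01.sh/busy_poll02.sh
and goes negative whenever res_50 > res_0; the second is the variant this
patch proposes, which stays positive in that situation.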

Comments

Alexey Kodanev March 19, 2018, 12:25 p.m. UTC | #1
On 19.03.2018 11:18, sunlianwen wrote:
> Setting low latency busy poll to 50 per socket spends more
> time than setting low latency busy poll to 0 per socket.
> That means the value of res_50 is bigger than res_0, so the
> value of poll_cmp is always less than zero and the test fails.

No, if busy poll is enabled, i.e. the value is set above 0, we should
expect a performance gain in this test.

With what driver and kernel version do you get such results?
Is busy polling really supported by that driver?

Thanks,
Alexey
Sun Lianwen March 20, 2018, 12:18 a.m. UTC | #2
Hi Alexey,

You are right, my assumption was wrong. I debugged this case again
and found that the virtio_net driver does not support busy poll.

Thanks for your advice.

Best Regards,
Lianwen Sun


On 03/19/2018 08:25 PM, Alexey Kodanev wrote:
> On 19.03.2018 11:18, sunlianwen wrote:
>> Setting low latency busy poll to 50 per socket spends more
>> time than setting low latency busy poll to 0 per socket.
>> That means the value of res_50 is bigger than res_0, so the
>> value of poll_cmp is always less than zero and the test fails.
> No, if busy poll is enabled, i.e. the value is set above 0, we should
> expect a performance gain in this test.
>
> With what driver and kernel version do you get such results?
> Is busy polling really supported by that driver?
>
> Thanks,
> Alexey
>
>
>
Sun Lianwen March 20, 2018, 12:20 a.m. UTC | #3
On 03/19/2018 08:25 PM, Alexey Kodanev wrote:
> On 19.03.2018 11:18, sunlianwen wrote:
>> Setting low latency busy poll to 50 per socket spends more
>> time than setting low latency busy poll to 0 per socket.
>> That means the value of res_50 is bigger than res_0, so the
>> value of poll_cmp is always less than zero and the test fails.
> No, if busy poll is enabled, i.e. the value is set above 0, we should
> expect a performance gain in this test.
>
> With what driver and kernel version do you get such results?
> Is busy polling really supported by that driver?
The kernel version is 4.16-rc5 and the driver is virtio_net.
> Thanks,
> Alexey
>
>
>
Thanks,
Lianwen Sun
Alexey Kodanev March 21, 2018, 10:50 a.m. UTC | #4
On 03/20/2018 03:18 AM, sunlianwen wrote:
> Hi Alexey
> 
> You are right, my assumption was wrong. I debugged this case again
> and found that the virtio_net driver does not support busy poll.


There is support in virtio_net... maybe the problem is in the underlying
configuration/driver, or in the latency between the guest and the other host?
You could also try netperf -H remote_host -t TCP_RR with/without busy polling:

# sysctl net.core.busy_read=50
# sysctl net.core.busy_poll=50
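
For example, a minimal loop that runs the comparison in one go (a sketch
only; remote_host is a placeholder for the netserver address):

# for val in 0 50; do
      sysctl net.core.busy_read=$val
      sysctl net.core.busy_poll=$val
      netperf -H remote_host -t TCP_RR
  done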

Thanks,
Alexey
Sun Lianwen March 22, 2018, 8:03 a.m. UTC | #5
Hi Alexey

On 03/21/2018 06:50 PM, Alexey Kodanev wrote:
> On 03/20/2018 03:18 AM, sunlianwen wrote:
>> Hi Alexey,
>>
>> You are right, my assumption was wrong. I debugged this case again
>> and found that the virtio_net driver does not support busy poll.
>
> There is support in virtio_net... maybe the problem is in the underlying
> configuration/driver, or in the latency between the guest and the other host?
> You could also try netperf -H remote_host -t TCP_RR with/without busy polling:
>
> # sysctl net.core.busy_read=50
> # sysctl net.core.busy_poll=50
>
Thanks for your advice. I also found a patch, "virtio_net: remove custom busy_poll":
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/net/virtio_net.c?id=ceef438d613f6d
I am not sure whether this patch means that virtio_net no longer supports busy poll.

Below is the debug info, following your advice.

# sysctl net.core.busy_read=0
net.core.busy_read = 0

# sysctl net.core.busy_poll=0
net.core.busy_poll = 0

# netperf -H 192.168.122.248 -t TCP_RR
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.122.248 () port 0 AF_INET : first burst 0
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate
bytes  Bytes  bytes    bytes   secs.    per sec

16384  87380  1        1       10.00    30101.63
16384  87380

# sysctl net.core.busy_read=50
net.core.busy_read = 50
# sysctl net.core.busy_poll=50
net.core.busy_poll = 50

# netperf -H 192.168.122.248 -t TCP_RR
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.122.248 () port 0 AF_INET : first burst 0
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate
bytes  Bytes  bytes    bytes   secs.    per sec

16384  87380  1        1       10.00    37968.90
16384  87380

-----------------------------------------------------------------------
<<<test_output>>>
incrementing stop
busy_poll01 1 TINFO: Network config (local -- remote):
busy_poll01 1 TINFO: eth1 -- eth1
busy_poll01 1 TINFO: 192.168.1.41/24 -- 192.168.1.20/24
busy_poll01 1 TINFO: fd00:1:1:1::1/64 -- fd00:1:1:1::2/64
busy_poll01 1 TINFO: set low latency busy poll to 50
busy_poll01 1 TINFO: run server 'netstress -R 500000 -B /tmp/ltp-EmybkMxKgu/busy_poll01.IIOgfKYQ6P'
busy_poll01 1 TINFO: run client 'netstress -l -H 192.168.1.20 -a 2 -r 500000 -d res_50 -g 44175'
busy_poll01 1 TPASS: netstress passed, time spent '53265' ms
busy_poll01 2 TINFO: set low latency busy poll to 0
busy_poll01 2 TINFO: run server 'netstress -R 500000 -B /tmp/ltp-EmybkMxKgu/busy_poll01.IIOgfKYQ6P'
busy_poll01 2 TINFO: run client 'netstress -l -H 192.168.1.20 -a 2 -r 500000 -d res_0 -g 46767'
busy_poll01 2 TPASS: netstress passed, time spent '23393' ms
busy_poll01 3 TFAIL: busy poll result is '-127' %
<<<execution_status>>>
initiation_status="ok"
duration=79 termination_type=exited termination_id=1 corefile=no
cutime=148 cstime=6930
<<<test_end>>>
INFO: ltp-pan reported some tests FAIL
LTP Version: 20180118

###############################################################

             Done executing testcases.
             LTP Version:  20180118
###############################################################
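
For reference, the '-127' above follows directly from the test's comparison
formula applied to the two reported timings (integer shell arithmetic with
res_50=53265 and res_0=23393):

# echo $(( 100 - (53265 * 100) / 23393 ))
-127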

Thanks,
Lianwen Sun
Alexey Kodanev March 22, 2018, 11:14 a.m. UTC | #6
On 22.03.2018 11:03, sunlianwen wrote:
> Hi Alexey
> 
> On 03/21/2018 06:50 PM, Alexey Kodanev wrote:
>> On 03/20/2018 03:18 AM, sunlianwen wrote:
>>> Hi Alexey
>>>
>>> You are right, my assumption was wrong. I debugged this case again
>>> and found that the virtio_net driver does not support busy poll.
>>
>> There is support in virtio_net... maybe the problem is in the underlying
>> configuration/driver, or in the latency between the guest and the other host?
>> You could also try netperf -H remote_host -t TCP_RR with/without busy polling:
>>
>> # sysctl net.core.busy_read=50
>> # sysctl net.core.busy_poll=50
>>
> Thanks for your advice. I also found a patch, "virtio_net: remove custom busy_poll":
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/net/virtio_net.c?id=ceef438d613f6d
> I am not sure whether this patch means that virtio_net no longer supports busy poll.

It means that it is using the generic busy polling implementation now.

> 
> Below is the debug info, following your advice.
> 
> # sysctl net.core.busy_read=0
> net.core.busy_read = 0
> 
> # sysctl net.core.busy_poll=0
> net.core.busy_poll = 0
> 
> # netperf -H 192.168.122.248 -t TCP_RR
> MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.122.248 () port 0 AF_INET : first burst 0
> Local /Remote
> Socket Size   Request  Resp.   Elapsed  Trans.
> Send   Recv   Size     Size    Time     Rate
> bytes  Bytes  bytes    bytes   secs.    per sec
> 
> 16384  87380  1        1       10.00    30101.63
> 16384  87380
> 
> # sysctl net.core.busy_read=50
> net.core.busy_read = 50
> # sysctl net.core.busy_poll=50
> net.core.busy_poll = 50
> 
> # netperf -H 192.168.122.248 -t TCP_RR
> MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.122.248 () port 0 AF_INET : first burst 0
> Local /Remote
> Socket Size   Request  Resp.   Elapsed  Trans.
> Send   Recv   Size     Size    Time     Rate
> bytes  Bytes  bytes    bytes   secs.    per sec
> 
> 16384  87380  1        1       10.00    37968.90
> 16384  87380
> 

Looks like busy polling is working; the transaction rate is obviously faster
than without busy polling.
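
For reference, the gain in the quoted numbers works out to roughly 26%,
e.g. with a quick awk one-liner:

# awk 'BEGIN { printf "%.1f%%\n", (37968.90 - 30101.63) * 100 / 30101.63 }'
26.1%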

> -----------------------------------------------------------------------
> <<<test_output>>>
> incrementing stop
> busy_poll01 1 TINFO: Network config (local -- remote):
> busy_poll01 1 TINFO: eth1 -- eth1
> busy_poll01 1 TINFO: 192.168.1.41/24 -- 192.168.1.20/24

You ran netperf against the 192.168.122.248 server, but LTP is using other hosts?
Could you check netperf and LTP with the same hosts/interfaces?


> busy_poll01 1 TINFO: fd00:1:1:1::1/64 -- fd00:1:1:1::2/64
> busy_poll01 1 TINFO: set low latency busy poll to 50
> busy_poll01 1 TINFO: run server 'netstress -R 500000 -B /tmp/ltp-EmybkMxKgu/busy_poll01.IIOgfKYQ6P'
> busy_poll01 1 TINFO: run client 'netstress -l -H 192.168.1.20 -a 2 -r 500000 -d res_50 -g 44175'
> busy_poll01 1 TPASS: netstress passed, time spent '53265' ms
> busy_poll01 2 TINFO: set low latency busy poll to 0
> busy_poll01 2 TINFO: run server 'netstress -R 500000 -B /tmp/ltp-EmybkMxKgu/busy_poll01.IIOgfKYQ6P'
> busy_poll01 2 TINFO: run client 'netstress -l -H 192.168.1.20 -a 2 -r 500000 -d res_0 -g 46767'
> busy_poll01 2 TPASS: netstress passed, time spent '23393' ms
> busy_poll01 3 TFAIL: busy poll result is '-127' %
> <<<execution_status>>>
> initiation_status="ok"
> duration=79 termination_type=exited termination_id=1 corefile=no
> cutime=148 cstime=6930
> <<<test_end>>>
> INFO: ltp-pan reported some tests FAIL
> LTP Version: 20180118
> 
>        ###############################################################
> 
>             Done executing testcases.
>             LTP Version:  20180118
>        ###############################################################
> 
> Thanks,
> Lianwen Sun

Patch

diff --git a/testcases/network/busy_poll/busy_poll01.sh b/testcases/network/busy_poll/busy_poll01.sh
index 3c3035600..119ae0176 100755
--- a/testcases/network/busy_poll/busy_poll01.sh
+++ b/testcases/network/busy_poll/busy_poll01.sh
@@ -60,7 +60,7 @@  for x in 50 0; do
         tst_netload -H $(tst_ipaddr rhost) -d res_$x
  done

-poll_cmp=$(( 100 - ($(cat res_50) * 100) / $(cat res_0) ))
+poll_cmp=$(( 100 - ($(cat res_0) * 100) / $(cat res_50) ))

  if [ "$poll_cmp" -lt 1 ]; then
         tst_resm TFAIL "busy poll result is '$poll_cmp' %"
diff --git a/testcases/network/busy_poll/busy_poll02.sh b/testcases/network/busy_poll/busy_poll02.sh
index 427857996..05b5cf8c4 100755
--- a/testcases/network/busy_poll/busy_poll02.sh
+++ b/testcases/network/busy_poll/busy_poll02.sh
@@ -51,7 +51,7 @@  for x in 50 0; do
         tst_netload -H $(tst_ipaddr rhost) -d res_$x -b $x
  done

-poll_cmp=$(( 100 - ($(cat res_50) * 100) / $(cat res_0) ))
+poll_cmp=$(( 100 - ($(cat res_0) * 100) / $(cat res_50) ))

  if [ "$poll_cmp" -lt 1 ]; then
         tst_resm TFAIL "busy poll result is '$poll_cmp' %"