[2/2] benchtests: Add a new argument -t to read throughput results

Message ID 1505756414-12857-2-git-send-email-siddhesh@sourceware.org
State New
Series [1/2] benchtests: Memory walking benchmark for memcpy

Commit Message

Siddhesh Poyarekar Sept. 18, 2017, 5:40 p.m. UTC
String benchmarks that store results as throughput rather than
latencies will show positive improvements as negative.  Add a flag to
fix the output of compare_strings.py in such cases.

	* benchtests/scripts/compare_strings.py: New option -t.
---
 benchtests/scripts/compare_strings.py | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
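
A quick worked example of the sign flip (illustrative only, not part
of the patch):

  base = 10.0  # baseline ifunc result
  t = 8.0      # candidate ifunc result

  diff = (base - t) * 100 / base  # +20.00%

  # If these are latencies, the candidate is 20% faster: a genuine
  # improvement.  If they are throughput figures, the candidate is
  # actually 20% slower, so the new flag flips the sign:
  throughput = True  # i.e. -t was passed
  if throughput:
      diff = -diff   # reported as -20.00%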

Comments

Carlos O'Donell Sept. 21, 2017, 6:31 p.m. UTC | #1
On 09/18/2017 11:40 AM, Siddhesh Poyarekar wrote:
> String benchmarks that store results as throughput rather than
> latencies will show positive improvements as negative.  Add a flag to
> fix the output of compare_strings.py in such cases.
> 
> 	* benchtests/scripts/compare_strings.py: New option -t.
... and you wouldn't need this patch if you'd not changed to throughput.

Can't you just post-process the data to get throughput for your fancy
graphs... or better yet add fancy graph support directly to benchtests ;-)

Siddhesh Poyarekar Sept. 21, 2017, 11:55 p.m. UTC | #2
On Friday 22 September 2017 12:01 AM, Carlos O'Donell wrote:
> On 09/18/2017 11:40 AM, Siddhesh Poyarekar wrote:
>> String benchmarks that store results as throughput rather than
>> latencies will show positive improvements as negative.  Add a flag to
>> fix the output of compare_strings.py in such cases.
>>
>> 	* benchtests/scripts/compare_strings.py: New option -t.
> ... and you wouldn't need this patch if you'd not changed to throughput.
> 
> Can't you just post-process the data to get throughput for your fancy
> graphs... or better yet add fancy graph support directly to benchtests ;-)


I suppose I could add a property to the benchmark output itself like:

  "result-type": "rate" | "time"

which would serve as a hint to any post-processing script, such as
compare_strings.py.
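
Roughly, the consumer side could look like this (an untested sketch;
the "result-type" property does not exist yet and the input file name
is made up):

  import json

  # Sketch: honour the proposed "result-type" property when computing
  # the percent difference against the baseline.
  with open('bench-memcpy-walk.out') as f:
      results = json.load(f)

  is_rate = results.get('result-type', 'time') == 'rate'

  def percent_diff(base, t):
      diff = (base - t) * 100 / base
      # For rates, bigger is better, so flip the sign.
      return -diff if is_rate else diff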

BTW, there's a -g switch that generates graphs for the string benchmarks
in compare_strings.py.  One needs to exclude the simple_* string
functions to get more meaningful results since these tend to be
significantly slower, thus unnecessarily increasing the range of the Y-axis.
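
For instance (untested, and assuming the matching timing columns are
filtered out as well):

  # Sketch: drop the slow simple_* variants before graphing so they
  # do not stretch the range of the Y-axis.
  ifuncs = [f for f in ifuncs if not f.startswith('simple_')]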

Siddhesh

Patch

diff --git a/benchtests/scripts/compare_strings.py b/benchtests/scripts/compare_strings.py
index 65119ed..acb64b9 100755
--- a/benchtests/scripts/compare_strings.py
+++ b/benchtests/scripts/compare_strings.py
@@ -79,7 +79,7 @@  def draw_graph(f, v, ifuncs, results):
     pylab.savefig('%s-%s.png' % (f, v), bbox_inches='tight')
 
 
-def process_results(results, attrs, base_func, graph):
+def process_results(results, attrs, base_func, graph, throughput):
     """ Process results and print them
 
     Args:
@@ -110,6 +110,8 @@  def process_results(results, attrs, base_func, graph):
                 if i != base_index:
                     base = res['timings'][base_index]
                     diff = (base - t) * 100 / base
+                    if throughput:
+                        diff = -diff
                     sys.stdout.write (' (%6.2f%%)' % diff)
                 sys.stdout.write('\t')
                 i = i + 1
@@ -132,7 +134,7 @@  def main(args):
     attrs = args.attributes.split(',')
 
     results = parse_file(args.input, args.schema)
-    process_results(results, attrs, base_func, args.graph)
+    process_results(results, attrs, base_func, args.graph, args.throughput)
 
 
 if __name__ == '__main__':
@@ -152,6 +154,8 @@  if __name__ == '__main__':
                         help='IFUNC variant to set as baseline.')
     parser.add_argument('-g', '--graph', action='store_true',
                         help='Generate a graph from results.')
+    parser.add_argument('-t', '--throughput', action='store_true',
+                        help='Treat results as throughput and not time.')
 
     args = parser.parse_args()
     main(args)