[conntrack-tools,1/3] tests: introduce new python-based framework for running tests

Message ID: 161144773322.52227.18304556638755743629.stgit@endurance
State: Accepted
Delegated to: Pablo Neira
Series: [conntrack-tools,1/3] tests: introduce new python-based framework for running tests

Commit Message

Arturo Borrero Gonzalez Jan. 24, 2021, 12:22 a.m. UTC
This test suite should help us develop better tests for conntrack-tools in general and conntrackd
in particular.

The framework is composed of a runner script, written in python3, and 3 yaml files for
configuration and testcase definition:

 - scenarios.yaml: contains information on network scenarios for tests to use
 - tests.yaml: contains testcase definitions
 - env.yaml: contains default values for environment variables

A test case can be anything from a simple command to a call to an external script that performs
more complex operations. See follow-up patches for more details on how this works.
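
For illustration, a testcase and its matching scenario could look like this (the
names and commands here are just examples; see the file format notes in the
script header below):

=== 8< ===
# tests.yaml
- name: "test 1"
  scenario: scenario1
  test:
    - conntrack -L

# scenarios.yaml
- name: scenario1
  start:
    - ip netns add test1
  stop:
    - ip netns del test1
=== 8< ===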

The plan is for this framework to replace, or call into, the other test suites in this tree.

The runner script is rather simple, and it should be more or less straightforward to use.
It requires the python3-yaml package to be installed.
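
On Debian, for example:

=== 8< ===
$ sudo apt install python3-yaml
=== 8< ===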

For reference, here are the script options:

=== 8< ===
$ tests/cttools-testing-framework.py --help
usage: cttools-testing-framework.py [-h] [--tests-file TESTS_FILE]
				[--scenarios-file SCENARIOS_FILE]
				[--env-file ENV_FILE]
				[--single SINGLE]
				[--start-scenario START_SCENARIO]
				[--stop-scenario STOP_SCENARIO]
				[--debug]

Utility to run tests for conntrack-tools

optional arguments:
  -h, --help            show this help message and exit
  --tests-file TESTS_FILE
                        File with testcase definitions. Defaults to 'tests.yaml'
  --scenarios-file SCENARIOS_FILE
                        File with configuration scenarios for tests. Defaults to 'scenarios.yaml'
  --env-file ENV_FILE   File with environment variables for scenarios/tests. Defaults to 'env.yaml'
  --single SINGLE       Execute a single testcase and exit. Use this for developing testcases
  --start-scenario START_SCENARIO
                        Execute scenario start commands and exit. Use this for developing testcases
  --stop-scenario STOP_SCENARIO
                        Execute scenario stop commands and exit. Use this for cleanup
  --debug               debug mode
=== 8< ===

To run it, simply use:

=== 8< ===
$ cd tests/ ; sudo ./cttools-testing-framework.py
[..]
=== 8< ===
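
While developing a testcase, a single one can be run by name, e.g. (the name
below is just an example from tests.yaml):

=== 8< ===
$ cd tests/ ; sudo ./cttools-testing-framework.py --single "test 1"
=== 8< ===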

Signed-off-by: Arturo Borrero Gonzalez <arturo@netfilter.org>
---
 cttools-testing-framework.py |  271 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 271 insertions(+)

Comments

Pablo Neira Ayuso Feb. 1, 2021, 3:31 a.m. UTC | #1
Hi Arturo,

On Sun, Jan 24, 2021 at 01:22:37AM +0100, Arturo Borrero Gonzalez wrote:
> This test suite should help us develop better tests for conntrack-tools in general and conntrackd
> in particular.
> 
> The framework is composed of a runner script, written in python3, and 3 yaml files for
> configuration and testcase definition:
> 
>  - scenarios.yaml: contains information on network scenarios for tests to use
>  - tests.yaml: contains testcase definitions
>  - env.yaml: contains default values for environment variables
> 
> A test case can be anything from a simple command to a call to an external script that performs
> more complex operations. See follow-up patches for more details on how this works.
> 
> The plan is for this framework to replace, or call into, the other test suites in this tree.
> 
> The runner script is rather simple, and it should be more or less straightforward to use.
> It requires the python3-yaml package to be installed.
> 
> For reference, here are the script options:
> 
> === 8< ===
> $ tests/cttools-testing-framework.py --help
> usage: cttools-testing-framework.py [-h] [--tests-file TESTS_FILE]
> 				[--scenarios-file SCENARIOS_FILE]
> 				[--env-file ENV_FILE]
> 				[--single SINGLE]
> 				[--start-scenario START_SCENARIO]
> 				[--stop-scenario STOP_SCENARIO]
> 				[--debug]
> 
> Utility to run tests for conntrack-tools
> 
> optional arguments:
>   -h, --help            show this help message and exit
>   --tests-file TESTS_FILE
>                         File with testcase definitions. Defaults to 'tests.yaml'
>   --scenarios-file SCENARIOS_FILE
>                         File with configuration scenarios for tests. Defaults to 'scenarios.yaml'
>   --env-file ENV_FILE   File with environment variables for scenarios/tests. Defaults to 'env.yaml'
>   --single SINGLE       Execute a single testcase and exit. Use this for developing testcases
>   --start-scenario START_SCENARIO
>                         Execute scenario start commands and exit. Use this for developing testcases
>   --stop-scenario STOP_SCENARIO
>                         Execute scenario stop commands and exit. Use this for cleanup
>   --debug               debug mode
> === 8< ===
> 
> To run it, simply use:
> 
> === 8< ===
> $ cd tests/ ; sudo ./cttools-testing-framework.py

Automated regression test infrastructure is nice to have!

A few nitpick requests and one suggestion:

* Rename cttools-testing-framework.py to conntrackd-tests.py
* Move it to the tests/conntrackd/ folder
* Missing yaml dependency in python in my test machine

Traceback (most recent call last):
  File "cttools-testing-framework.py", line 36, in <module>
    import yaml
ModuleNotFoundError: No module named 'yaml'

this is installed from pip, right? Just a note in the commit message
is fine.

* Would it be possible to define the scenario in shell script files?
  For example, to define the "simple_stats" scenario, the YAML file
  looks like this:

- name: simple_stats
  script: shell/simple_stats.sh

The shell script takes "start" or "stop" as $1 to set up or tear down
the scenario. For developing more tests, having separate shell scripts
might be convenient.
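
Something like this, just as a sketch (the netns name here is made up):

#!/bin/sh
case "$1" in
    start)
        ip netns add ns-test
        ;;
    stop)
        ip netns del ns-test
        ;;
esac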

This also allows running a scenario for development purposes outside the
automated regression infrastructure (although you already thought about
this with the --start-scenario and --stop-scenario options; I think
those options are fine and would not remove them).

Thanks !
Arturo Borrero Gonzalez Feb. 1, 2021, 10:49 a.m. UTC | #2
On 2/1/21 4:31 AM, Pablo Neira Ayuso wrote:
> 
> A few nitpick requests and one suggestion:
> 
> * Rename cttools-testing-framework.py to conntrackd-tests.py

Done.

> * Move it to the tests/conntrackd/ folder

Done.


> * Missing yaml dependency in python in my test machine
> 
> Traceback (most recent call last):
>    File "cttools-testing-framework.py", line 36, in <module>
>      import yaml
> ModuleNotFoundError: No module named 'yaml'
> 
> this is installed from pip, right? Just a note in the commit message
> is fine.

It was already present in the commit message.

I made it clearer:

=== 8< ===
On Debian machines, it requires the *python3-yaml* package to be installed as a 
dependency
=== 8< ===

> 
> * Would it be possible to define the scenario in shell script files?
>    For example, to define the "simple_stats" scenario, the YAML file
>    looks like this:
> 
> - name: simple_stats
>   script: shell/simple_stats.sh
> 
> The shell script takes "start" or "stop" as $1 to set up or tear down
> the scenario. For developing more tests, having separate shell scripts
> might be convenient.
> 

This is already supported:

=== 8< ===
- name: myscenario
  start:
    - ./script.sh start
  stop:
    - ./script.sh stop
=== 8< ===

> Thanks !
> 

Thanks for the review. I made the changes you requested and pushed it to the 
repository.

I plan to follow up soon with more tests.

Question: I have a few testcases that trigger bugs, segfaults, etc. Would it be
OK to create something like 'failingtestcases.yaml' and register all those bugs
there until they get fixed? That way we have reproducers for those bugs until we can fix them.
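
Such a file could then be run on demand with the existing option, something
like:

=== 8< ===
$ sudo ./conntrackd-tests.py --tests-file failingtestcases.yaml
=== 8< ===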
Pablo Neira Ayuso Feb. 1, 2021, 5:05 p.m. UTC | #3
On Mon, Feb 01, 2021 at 11:49:02AM +0100, Arturo Borrero Gonzalez wrote:
> On 2/1/21 4:31 AM, Pablo Neira Ayuso wrote:
[...]
> > * Missing yaml dependency in python in my test machine
> > 
> > Traceback (most recent call last):
> >    File "cttools-testing-framework.py", line 36, in <module>
> >      import yaml
> > ModuleNotFoundError: No module named 'yaml'
> > 
> > this is installed from pip, right? Just a note in the commit message
> > is fine.
> 
> It was already present in the commit message.
> 
> I made it more clear:
> 
> === 8< ===
> On Debian machines, it requires the *python3-yaml* package to be installed
> as a dependency
> === 8< ===

Sorry, I overlooked this.

> > * Would it be possible to define the scenario in shell script files?
> >    For example, to define the "simple_stats" scenario, the YAML file
> >    looks like this:
> > 
> > - name: simple_stats
> >   script: shell/simple_stats.sh
> > 
> > The shell script takes "start" or "stop" as $1 to set up or tear down
> > the scenario. For developing more tests, having separate shell scripts
> > might be convenient.
> > 
> 
> This is already supported:
> 
> === 8< ===
> - name: myscenario
>   start:
>     - ./script.sh start
>   stop:
>     - ./script.sh stop
> === 8< ===

Ok, I've sent a patch to move the netns network setup to a shell
script:

https://patchwork.ozlabs.org/project/netfilter-devel/patch/20210201170015.28217-1-pablo@netfilter.org/

> > Thanks !
> > 
> 
> Thanks for the review. I made the changes you requested and pushed it to the
> repository.
> 
> I plan to follow up soon with more tests.
>
> Question: I have a few testcases that trigger bugs, segfaults, etc. Would it
> be OK to create something like 'failingtestcases.yaml' and register all
> those bugs there until they get fixed? That way we have reproducers for
> those bugs until we can fix them.

That's fine, but before we add more tests, let's see where we can move
the inlined configurations in the yaml files to independent files that
can be reused by new tests.

Thanks.
Arturo Borrero Gonzalez Feb. 2, 2021, 10:23 a.m. UTC | #4
On 2/1/21 6:05 PM, Pablo Neira Ayuso wrote:
> That's fine, but before we add more tests, let's see where we can move
> the inlined configurations in the yaml files to independent files that
> can be reused by new tests.
> 

ok!

Patch

diff --git a/tests/cttools-testing-framework.py b/tests/cttools-testing-framework.py
new file mode 100755
index 0000000..f760351
--- /dev/null
+++ b/tests/cttools-testing-framework.py
@@ -0,0 +1,271 @@
+#!/usr/bin/env python3
+
+# (C) 2021 by Arturo Borrero Gonzalez <arturo@netfilter.org>
+
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+
+# tests.yaml file format:
+#  - name: "test 1"
+#    scenario: scenario1
+#    test:
+#      - test1 cmd1
+#      - test1 cmd2
+
+# scenarios.yaml file format:
+# - name: scenario1
+#   start:
+#     - cmd1
+#     - cmd2
+#   stop:
+#     - cmd1
+#     - cmd2
+
+# env.yaml file format:
+# - VAR1: value1
+# - VAR2: value2
+
+import os
+import sys
+import argparse
+import subprocess
+import yaml
+import logging
+
+
+def read_yaml_file(file):
+    try:
+        with open(file, "r") as stream:
+            try:
+                return yaml.safe_load(stream)
+            except yaml.YAMLError as e:
+                logging.error(e)
+                exit(2)
+    except FileNotFoundError as e:
+        logging.error(e)
+        exit(2)
+
+
+def validate_dictionary(dictionary, keys):
+    if not isinstance(dictionary, dict):
+        logging.error("not a dictionary:\n{}".format(dictionary))
+        return False
+    for key in keys:
+        if dictionary.get(key) is None:
+            logging.error("missing key {} in dictionary:\n{}".format(key, dictionary))
+            return False
+    return True
+
+
+def stage_validate_config(args):
+    scenarios_dict = read_yaml_file(args.scenarios_file)
+    for definition in scenarios_dict:
+        if not validate_dictionary(definition, ["name", "start", "stop"]):
+            logging.error("couldn't validate file {}".format(args.scenarios_file))
+            return False
+
+    logging.debug("{} seems valid".format(args.scenarios_file))
+    ctx.scenarios_dict = scenarios_dict
+
+    tests_dict = read_yaml_file(args.tests_file)
+    for definition in tests_dict:
+        if not validate_dictionary(definition, ["name", "scenario", "test"]):
+            logging.error("couldn't validate file {}".format(args.tests_file))
+            return False
+
+    logging.debug("{} seems valid".format(args.tests_file))
+    ctx.tests_dict = tests_dict
+
+    env_list = read_yaml_file(args.env_file)
+    if not isinstance(env_list, list):
+        logging.error("couldn't validate file {}".format(args.env_file))
+        return False
+
+    # set env to default values if not overridden when calling this very script
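+    # (e.g. "VAR1=foo ./cttools-testing-framework.py" keeps VAR1, overriding env.yaml)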
+    for entry in env_list:
+        for key in entry:
+            os.environ[key] = os.getenv(key, entry[key])
+
+    return True
+
+
+def cmd_run(cmd):
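+    # run through a shell so yaml entries can use env vars, pipes and redirections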
+    logging.debug("running command: {}".format(cmd))
+    r = subprocess.run(cmd, shell=True)
+    if r.returncode != 0:
+        logging.warning("failed command: {}".format(cmd))
+    return r.returncode
+
+
+def scenario_get(name):
+    for n in ctx.scenarios_dict:
+        if n["name"] == name:
+            return n
+
+    logging.error("couldn't find a definition for scenario '{}'".format(name))
+    exit(1)
+
+
+def scenario_start(scenario):
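+    # a failed start command counts as a scenario failure and skips the current test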
+    for cmd in scenario["start"]:
+        if cmd_run(cmd) == 0:
+            continue
+
+        logging.warning("--- failed scenario: {}".format(scenario["name"]))
+        ctx.counter_scenario_failed += 1
+        ctx.skip_current_test = True
+        return
+
+
+def scenario_stop(scenario):
+    for cmd in scenario["stop"]:
+        cmd_run(cmd)
+
+
+def test_get(name):
+    for n in ctx.tests_dict:
+        if n["name"] == name:
+            return n
+
+    logging.error("couldn't find a definition for test '{}'".format(name))
+    exit(1)
+
+
+def _test_run(test_definition):
+    if ctx.skip_current_test:
+        return
+
+    for cmd in test_definition["test"]:
+        if cmd_run(cmd) == 0:
+            continue
+
+        logging.warning("--- failed test: {}".format(test_definition["name"]))
+        ctx.counter_test_failed += 1
+        return
+
+    logging.info("--- passed test: {}".format(test_definition["name"]))
+    ctx.counter_test_ok += 1
+
+
+def test_run(test_definition):
+    scenario = scenario_get(test_definition["scenario"])
+
+    logging.info("--- running test: {}".format(test_definition["name"]))
+
+    scenario_start(scenario)
+    _test_run(test_definition)
+    scenario_stop(scenario)
+
+
+def stage_run_tests(args):
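+    # the --start-scenario, --stop-scenario and --single options short-circuit a full run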
+    if args.start_scenario:
+        scenario_start(scenario_get(args.start_scenario))
+        return
+
+    if args.stop_scenario:
+        scenario_stop(scenario_get(args.stop_scenario))
+        return
+
+    if args.single:
+        test_run(test_get(args.single))
+        return
+
+    for test_definition in ctx.tests_dict:
+        ctx.skip_current_test = False
+        test_run(test_definition)
+
+
+def stage_report():
+    logging.info("---")
+    logging.info("--- finished")
+    total = ctx.counter_test_ok + ctx.counter_test_failed + ctx.counter_scenario_failed
+    logging.info("--- passed tests: {}".format(ctx.counter_test_ok))
+    logging.info("--- failed tests: {}".format(ctx.counter_test_failed))
+    logging.info("--- scenario failure: {}".format(ctx.counter_scenario_failed))
+    logging.info("--- total tests: {}".format(total))
+
+
+def parse_args():
+    description = "Utility to run tests for conntrack-tools"
+    parser = argparse.ArgumentParser(description=description)
+    parser.add_argument(
+        "--tests-file",
+        default="tests.yaml",
+        help="File with testcase definitions. Defaults to '%(default)s'",
+    )
+    parser.add_argument(
+        "--scenarios-file",
+        default="scenarios.yaml",
+        help="File with configuration scenarios for tests. Defaults to '%(default)s'",
+    )
+    parser.add_argument(
+        "--env-file",
+        default="env.yaml",
+        help="File with environment variables for scenarios/tests. Defaults to '%(default)s'",
+    )
+    parser.add_argument(
+        "--single",
+        help="Execute a single testcase and exit. Use this for developing testcases",
+    )
+    parser.add_argument(
+        "--start-scenario",
+        help="Execute scenario start commands and exit. Use this for developing testcases",
+    )
+    parser.add_argument(
+        "--stop-scenario",
+        help="Execute scenario stop commands and exit. Use this for cleanup",
+    )
+    parser.add_argument(
+        "--debug",
+        action="store_true",
+        help="debug mode",
+    )
+
+    return parser.parse_args()
+
+
+class Context:
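+    # mutable state shared across stages: parsed config, result counters, skip flag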
+    def __init__(self):
+        self.scenarios_dict = None
+        self.tests_dict = None
+        self.counter_test_failed = 0
+        self.counter_test_ok = 0
+        self.counter_scenario_failed = 0
+        self.skip_current_test = False
+
+
+# global data
+ctx = Context()
+
+
+def main():
+    args = parse_args()
+
+    logging_format = "[%(filename)s] %(levelname)s: %(message)s"
+    if args.debug:
+        logging_level = logging.DEBUG
+    else:
+        logging_level = logging.INFO
+    logging.basicConfig(format=logging_format, level=logging_level, stream=sys.stdout)
+
+    if os.geteuid() != 0:
+        logging.error("root required")
+        exit(1)
+
+    if not stage_validate_config(args):
+        exit(1)
+    stage_run_tests(args)
+    stage_report()
+
+
+if __name__ == "__main__":
+    main()