python - Separate test cases per input files?


Most test frameworks assume "1 test = 1 Python method/function", and consider a test passed when the function executes without raising an assertion.

I'm testing a compiler-like program (a program that reads *.foo files and processes their contents), and I want to execute the same test on many input (*.foo) files. In other words, the test looks like:

class Test(unittest.TestCase):
    def one_file(self, filename):
        # actual test

    def list_testcases(self):
        # os.listdir('tests/') and filter *.foo files.

    def test_all(self):
        for f in self.list_testcases():
            self.one_file(f)

My current code uses unittest from Python's standard library, i.e. one_file uses self.assert...(...) statements to check whether the test passes.
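For concreteness, here is a hedged sketch of what one_file could look like, assuming the program under test is an external ./mycompiler binary and that each input file has a matching .expected output file (both names are hypothetical, not from the actual project):

import subprocess
import unittest

class Test(unittest.TestCase):
    def one_file(self, filename):
        # Hypothetical setup: run the compiler-like program on the input
        # file and compare its stdout against a stored .expected file.
        result = subprocess.run(['./mycompiler', filename],
                                stdout=subprocess.PIPE,
                                universal_newlines=True)
        with open(filename + '.expected') as expected:
            self.assertEqual(result.stdout, expected.read())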

This works, in the sense that the program succeeds/fails when the code is OK/buggy, but I'm losing a lot of the advantages of a testing framework:

  • I don't get relevant reporting like "X failures out of Y tests", nor a list of passed/failed tests. (I'm planning to use such a system not to test my own development but to grade students' code as a teacher, so reporting is important to me.)

  • I don't get test independence: the second test runs in the environment left behind by the first one, and so on. The first failure stops the test suite, so test cases coming after the failure are not run at all.

  • I get the feeling I'm abusing the test framework: there's only one test function, so the automatic test discovery of unittest sounds like overkill for this example. The same code could (should?) be written in plain Python with a basic assert.

An obvious alternative is to change the code to something like:

class Test(unittest.TestCase):
    def one_file(self, filename):
        # actual test

    def test_file1(self):
        self.one_file("first-testcase.foo")

    def test_file2(self):
        self.one_file("second-testcase.foo")

Then the advantages of unittest are back, but:

  • It's a lot more code to write.

  • It's easy to "forget" a test case, i.e. to create a test file in tests/ and forget to add the corresponding Python test.

I can imagine a solution that generates one method per test case dynamically (along the lines of setattr(self, 'test_file' + str(n), ...)), i.e. generating the code of the second solution without having to write it by hand. But that sounds like overkill for a use case that doesn't seem that complex.
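For what it's worth, the dynamic approach needs less machinery than it sounds; here is a minimal sketch (the tests/ path and the placeholder assertion are assumptions, and the generated methods have to be attached to the class rather than to self so that unittest's discovery can see them):

import glob
import os
import unittest

class Test(unittest.TestCase):
    def one_file(self, filename):
        # actual test; trivial placeholder check so the sketch runs
        self.assertTrue(os.path.getsize(filename) >= 0)

def make_test(filename):
    # Build one test method bound to a specific input file.
    def test(self):
        self.one_file(filename)
    return test

# One test_* method per *.foo file, attached to the class so that
# unittest reports each file as an independent test case.
for path in sorted(glob.glob('tests/*.foo')):
    setattr(Test, 'test_' + os.path.basename(path), make_test(path))

if __name__ == '__main__':
    unittest.main()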

How do I get the best of both, i.e. automatic test case discovery (listing tests/*.foo files), test independence and proper reporting?

If you can use pytest as your test runner, this is pretty straightforward using the parametrize decorator:

import pytest, glob

all_files = glob.glob('some/path/*.foo')

@pytest.mark.parametrize('filename', all_files)
def test_one_file(filename):
    # actual test

This automatically names the tests in a useful way, so you can see which files have failed:

$ py.test
================================== test session starts ===================================
platform darwin -- Python 3.6.1, pytest-3.1.3, py-1.4.34, pluggy-0.4.0
[...]
======================================== FAILURES ========================================
_____________________________ test_one_file[some/path/a.foo] _____________________________

filename = 'some/path/a.foo'

    @pytest.mark.parametrize('filename', all_files)
    def test_one_file(filename):
>       assert False
E       assert False

test_it.py:7: AssertionError
_____________________________ test_one_file[some/path/b.foo] _____________________________

filename = 'some/path/b.foo'

    @pytest.mark.parametrize('filename', all_files)
    def test_one_file(filename):
[...]
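As a side note, if the full paths make the generated test IDs hard to read, parametrize also accepts an ids argument to control the naming; a small sketch along the same lines (the placeholder assertion is an assumption):

import glob
import os.path

import pytest

all_files = sorted(glob.glob('some/path/*.foo'))

# ids= controls how each generated test is named in the report;
# using the base name keeps the output readable for long paths.
@pytest.mark.parametrize('filename', all_files, ids=os.path.basename)
def test_one_file(filename):
    # actual test; trivial placeholder assertion
    assert os.path.getsize(filename) > 0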
