Python unittest: how do I run only part of a test file?

I have a test file that contains tests which take a lot of time (they send computations to a cluster and wait for the results). All of them live in a specific TestCase class.

Since they take time, and moreover are unlikely to break, I'd like to be able to choose whether this subset of tests runs (ideally via a command-line argument, e.g. "./tests.py --offline" or something similar), so that I can run most of the tests quickly and often, and the whole set once in a while, when I have time.

For now, I just use unittest.main() to start the tests.


Look into using a dedicated test runner, like py.test, Nose, or possibly even zope.testing. They all have command-line options for selecting tests.

Look for example at Nose.
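For example, both pytest and Nose can select tests straight from the command line, with no changes to the test file (MyTestCase and test_method below are placeholders for your own names):

pytest tests.py::MyTestCase::test_method   # select one test by node id
pytest tests.py -k "offline"               # select by keyword expression
nosetests tests.py:MyTestCase.test_method  # Nose equivalent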

You have basically two ways to do it:

  1. Define your own suite of tests for the class
  2. Create mock classes of the cluster connection that will return actual data.

I am a strong proponent of the second approach; a unit test should test only a single unit of code, not complex systems (like databases or clusters). But I understand that it is not always possible; sometimes, creating mock-ups is simply too expensive, or the goal of the test really is the complex system.
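If you do go the mocking route, unittest.mock (in the standard library since Python 3.3; available earlier as the third-party mock package) keeps it short. A minimal sketch, where mypackage, cluster.submit, and run_job are hypothetical names standing in for your real cluster interface:

import unittest
from unittest import mock

import mypackage  # hypothetical module that talks to the cluster


class TestJobLogic(unittest.TestCase):
    @mock.patch('mypackage.cluster.submit')
    def test_run_job_uses_cluster_result(self, mock_submit):
        mock_submit.return_value = 42  # canned "cluster" answer
        # run_job() gets the canned value instead of hitting the cluster
        self.assertEqual(mypackage.run_job(), 42)
        mock_submit.assert_called_once()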

Back to option (1), you can proceed in this way:

suite = unittest.TestSuite()
suite.addTest(MyUnitTestClass('quickRunningTest'))
suite.addTest(MyUnitTestClass('otherTest'))

and then passing the suite to the test runner:

unittest.TextTestRunner().run(suite)

More information is in the Python documentation: http://docs.python.org/library/unittest.html#testsuite-objects

The default unittest.main() uses the default test loader to make a TestSuite out of the module in which main is running.

You don't have to use this default behavior.

You can, for example, make three unittest.TestSuite instances.

  1. The "fast" subset.

    loader = unittest.TestLoader()
    fast = unittest.TestSuite()
    fast.addTests(loader.loadTestsFromTestCase(TestFastThis))
    fast.addTests(loader.loadTestsFromTestCase(TestFastThat))
    
  2. The "slow" subset.

    slow = unittest.TestSuite()
    slow.addTests(loader.loadTestsFromTestCase(TestSlowAnother))
    slow.addTests(loader.loadTestsFromTestCase(TestSlowSomeMore))
    
  3. The "whole" set.

    alltests = unittest.TestSuite([fast, slow])
    

Note that I've adjusted the TestCase names to indicate Fast vs. Slow. You can subclass unittest.TestLoader to parse the names of classes and create multiple loaders.
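A rough sketch of such a loader (the name PrefixLoader and its filtering rule are illustrative, not part of unittest):

import unittest


class PrefixLoader(unittest.TestLoader):
    """Illustrative loader: collect only TestCase classes whose
    names start with a given prefix, e.g. 'TestFast' or 'TestSlow'."""

    def __init__(self, prefix):
        super().__init__()
        self.prefix = prefix

    def loadTestsFromModule(self, module, *args, **kwargs):
        suite = unittest.TestSuite()
        for name in dir(module):
            obj = getattr(module, name)
            if (isinstance(obj, type)
                    and issubclass(obj, unittest.TestCase)
                    and name.startswith(self.prefix)):
                suite.addTests(self.loadTestsFromTestCase(obj))
        return suite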

Then your main program can parse command-line arguments with optparse or argparse (available since Python 2.7 and 3.2) to pick which suite you want to run: fast, slow, or all.
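A minimal argparse sketch, assuming the fast, slow, and alltests suites defined above:

import argparse
import unittest

parser = argparse.ArgumentParser()
parser.add_argument('suite', nargs='?', choices=['fast', 'slow', 'all'],
                    default='fast', help='which test suite to run')
args = parser.parse_args()

suites = {'fast': fast, 'slow': slow, 'all': alltests}
unittest.TextTestRunner().run(suites[args.suite])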

Or, you can trust that sys.argv[1] is one of three values and use something as simple as this:

if __name__ == "__main__":
    suite = eval(sys.argv[1])  # Be careful with this line!
    unittest.TextTestRunner().run(suite)

Actually, one can pass the names of test cases on the command line (they arrive in sys.argv), and only those cases will be run.

For instance, suppose you have

class TestAccount(unittest.TestCase):
    ...


class TestCustomer(unittest.TestCase):
    ...


class TestShipping(unittest.TestCase):
    ...


account = TestAccount
customer = TestCustomer
shipping = TestShipping

You can call

python test.py account

to run only the account tests, or even

python test.py account customer

to run both cases.
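This works because unittest.main() resolves each positional command-line argument as a name in the current module, so the lowercase aliases above simply make python test.py account equivalent to python test.py TestAccount:

if __name__ == '__main__':
    unittest.main()  # resolves 'account', 'customer', ... via the aliases above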

Or you can make use of TestCase's skipTest() method (which raises unittest.SkipTest). For example, add a skipOrRunTest method to your test class like this:

def skipOrRunTest(self, testType):
    # testsToRun = 'ALL'
    # testsToRun = 'testType1, testType2, testType3, testType4, ...'
    testsToRun = 'testType4'
    if testsToRun == 'ALL' or testType in testsToRun:
        return True
    print("SKIPPED TEST because:\n\t testSuite '" + testType +
          "' NOT IN testsToRun['" + testsToRun + "']")
    self.skipTest("skipppy!!!")

Then add a call to this skipOrRunTest method to the very beginning of each of your unit tests like this:

def testType4(self):
    self.skipOrRunTest('testType4')

To run only a single specific test, you can use:

python -m unittest test_module.TestClass.test_method

More information is in the unittest documentation.

Since you use unittest.main() you can just run python tests.py --help to get the documentation:

Usage: tests.py [options] [test] [...]


Options:
-h, --help       Show this message
-v, --verbose    Verbose output
-q, --quiet      Minimal output
-f, --failfast   Stop on first failure
-c, --catch      Catch control-C and display results
-b, --buffer     Buffer stdout and stderr during test runs


Examples:
tests.py                               - run default set of tests
tests.py MyTestSuite                   - run suite 'MyTestSuite'
tests.py MyTestCase.testSomething      - run MyTestCase.testSomething
tests.py MyTestCase                    - run all 'test*' test methods
in MyTestCase

That is, you can simply do

python tests.py TestClass.test_method

I have found another way to select just the test_* methods that I want to run, by adding an attribute to them. You basically use a metaclass to wrap every callable in the TestCase class that lacks the StepDebug attribute in a unittest.skip decorator. More information is in:

Skipping all unit tests but one in Python by using decorators and metaclasses

I don't know if it is a better solution than those above; I am just providing it as an option.
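A rough sketch of the idea (the names step_debug and SkipUnmarkedMeta are mine, not from the linked article): every test_* method that does not carry the StepDebug attribute gets wrapped in unittest.skip before the class is created.

import unittest


def step_debug(func):
    # Marker decorator: flag this test as one that should run.
    func.StepDebug = True
    return func


class SkipUnmarkedMeta(type):
    # Wrap every test_* method lacking the StepDebug attribute
    # in unittest.skip before the class is created.
    def __new__(mcs, name, bases, namespace):
        for attr, value in list(namespace.items()):
            if (attr.startswith('test') and callable(value)
                    and not getattr(value, 'StepDebug', False)):
                namespace[attr] = unittest.skip('not selected')(value)
        return super().__new__(mcs, name, bases, namespace)


class MyTests(unittest.TestCase, metaclass=SkipUnmarkedMeta):
    @step_debug
    def test_selected(self):
        self.assertTrue(True)

    def test_skipped(self):  # gets wrapped in unittest.skip
        self.fail('never runs')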

Goal: get a set of test files together so they can be run as a unit, but we can still select any one of them to run by itself.

Problem: the discover method does not allow easy selection of a single test case to run.

Design: see below. This flattens the namespace, so you can select by TestCase class name and leave off the "tests1.test_core" prefix:

./run-tests TestCore.test_fmap

Code

import unittest

test_module_names = [
    'tests1.test_core',
    'tests2.test_other',
    'tests3.test_foo',
]

# `args` and `opt` come from the script's command-line parsing (e.g. optparse).
loader = unittest.defaultTestLoader
if args:
    alltests = unittest.TestSuite()
    for a in args:
        for m in test_module_names:
            try:
                alltests.addTest(loader.loadTestsFromName(m + '.' + a))
            except AttributeError:
                continue
else:
    alltests = loader.loadTestsFromNames(test_module_names)

runner = unittest.TextTestRunner(verbosity=opt.verbose)
runner.run(alltests)

I tried S.Lott's answer:

if __name__ == "__main__":
    suite = eval(sys.argv[1])  # Be careful with this line!
    unittest.TextTestRunner().run(suite)

But that gave me the following error:

Traceback (most recent call last):
  File "functional_tests.py", line 178, in <module>
    unittest.TextTestRunner().run(suite)
  File "/usr/lib/python2.7/unittest/runner.py", line 151, in run
    test(result)
  File "/usr/lib/python2.7/unittest/case.py", line 188, in __init__
    testMethod = getattr(self, methodName)
TypeError: getattr(): attribute name must be string

The following worked for me:

if __name__ == "__main__":
    test_class = eval(sys.argv[1])
    suite = unittest.TestLoader().loadTestsFromTestCase(test_class)
    unittest.TextTestRunner().run(suite)

This is the only thing that worked for me.

if __name__ == '__main__':
    unittest.main(argv=sys.argv, testRunner=unittest.TextTestRunner(verbosity=2))

When I call it, though, I have to pass in both the class name and the test name, which is a little inconvenient since I don't have every class/test-name combination memorized.

python ./tests.py class_Name.test_30311

Removing the class name and test name runs all the tests in your file. I find this much easier to deal with than the built-in approach, since I don't really change my command on the CLI; I just add the parameter.

I'm doing this using a simple skipIf:

import os
import unittest


SLOW_TESTS = int(os.getenv('SLOW_TESTS', '0'))


@unittest.skipIf(not SLOW_TESTS, "slow")
class CheckMyFeature(unittest.TestCase):
    def runTest(self):
        …

This way I need only decorate an already existing test case with this single line (no need to create test suites or similar; just that one os.getenv() call at the beginning of my unit test file), and by default the test gets skipped.

If I want to execute it despite being slow, I just call my script like this:

SLOW_TESTS=1 python -m unittest …

I found another solution, based on how the unittest.skip decorator works: it sets the __unittest_skip__ and __unittest_skip_why__ attributes on the test.
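For reference, setting those attributes by hand looks like this; in current CPython it has the same effect as decorating with @unittest.skip:

import unittest


class MyTest(unittest.TestCase):
    def test_something(self):
        ...


# Same effect as decorating test_something with @unittest.skip('label exclusion'):
MyTest.test_something.__unittest_skip__ = True
MyTest.test_something.__unittest_skip_why__ = 'label exclusion'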

Label-based

I wanted to apply a labeling system, to label some tests as quick, slow, glacier, memoryhog, cpuhog, core, and so on.

Then run all 'quick' tests, or run everything except the 'memoryhog' tests: your basic whitelist/blacklist setup.

Implementation

I implemented this in two parts:

  1. First add labels to tests (via a custom @testlabel class decorator)
  2. A custom unittest.TestRunner to identify which tests to skip, and to modify the test list accordingly before executing.

Working implementation is in this gist: https://gist.github.com/fragmuffin/a245f59bdcd457936c3b51aa2ebb3f6c

(A fully working example was too long to put here.)
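Still, a stripped-down sketch of part (1), the label decorator, might look like this (illustrative only; the gist's version does more):

import unittest


def testlabel(*labels):
    # Attach a set of labels to a TestCase class; the custom runner
    # then skips any class whose labels hit the blacklist.
    def decorator(cls):
        cls._labels = set(labels)
        return cls
    return decorator


@testlabel('foo')
class MyTests1(unittest.TestCase):
    def test_one(self):
        pass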

The result being...

$ ./runtests.py --blacklist foo
test_foo (test_things.MyTest2) ... ok
test_bar (test_things.MyTest3) ... ok
test_one (test_things.MyTests1) ... skipped 'label exclusion'
test_two (test_things.MyTests1) ... skipped 'label exclusion'

----------------------------------------------------------------------
Ran 4 tests in 0.000s

OK (skipped=2)

All of MyTests1's tests are skipped because that class has the foo label.

--whitelist works as well.

I created a decorator that marks tests as slow and skips them when an environment variable is set:

import os
from unittest import skipIf


def slow_test(func):
    return skipIf('SKIP_SLOW_TESTS' in os.environ, 'Skipping slow test')(func)

Now you can mark your tests as slow like this:

@slow_test
def test_my_funky_thing(self):
    perform_test()

And skip slow tests by setting the SKIP_SLOW_TESTS environment variable:

SKIP_SLOW_TESTS=1 python -m unittest

I found this answer trying to figure out how to just run specific classes of tests; for example,

class TestCase1(unittest.TestCase):
    def test_something(self):
        self.assertEqual(True, True)


class TestCase2(unittest.TestCase):
    def test_something_else(self):
        self.assertEqual(False, False)

I wanted a quick way to comment out TestCase1 or TestCase2 that didn't involve me sweep-selecting 100+ lines of code, and I eventually landed on this:

if __name__ == "__main__":
    tests = []
    tests.append("TestCase1")
    # tests.append("TestCase2")
    unittest.main(defaultTest=tests)

It just uses unittest.main()'s defaultTest argument to specify which test classes to run. (Passing a list of names requires Python 3.5+; earlier versions accept only a single string.)

Sometimes I run each of my test functions manually. Say my test class looks like this...

class TestStuff(unittest.TestCase):
    def test1(self):
        ...

    def test2(self):
        ...

Then I run this...

t = TestStuff()
t.test1()
t.test2()

(I use the Spyder IDE for data analysis; this might not be ideal for IDEs with slicker testing tools. Also note that calling test methods directly like this bypasses setUp and tearDown.)