Continuing in Python's unittest when an assertion fails

EDIT: switched to a better example, and clarified why this is a real problem.

I'd like to write unit tests in Python that continue executing when an assertion fails, so that I can see multiple failures in a single test. For example:

import unittest


class Car(object):
    def __init__(self, make, model):
        self.make = make
        self.model = make  # Copy and paste error: should be model.
        self.has_seats = True
        self.wheel_count = 3  # Typo: should be 4.


class CarTest(unittest.TestCase):
    def test_init(self):
        make = "Ford"
        model = "Model T"
        car = Car(make=make, model=model)
        self.assertEqual(car.make, make)
        self.assertEqual(car.model, model)  # Failure!
        self.assertTrue(car.has_seats)
        self.assertEqual(car.wheel_count, 4)  # Failure!

Here, the purpose of the test is to ensure that Car's __init__ sets its fields correctly. I could break it up into four methods (and that's often a great idea), but in this case I think it's more readable to keep it as a single method that tests a single concept ("the object is initialized correctly").

If we assume that it's best not to break up the method here, then I have a new problem: I can't see all of the errors at once. When I fix the model error and re-run the test, the wheel_count error appears. It would have saved me time to see both errors when I first ran the test.

For comparison, Google's C++ unit testing framework distinguishes between non-fatal EXPECT_* assertions and fatal ASSERT_* assertions:

The assertions come in pairs that test the same thing but have different effects on the current function. ASSERT_* versions generate fatal failures when they fail, and abort the current function. EXPECT_* versions generate nonfatal failures, which don't abort the current function. Usually EXPECT_* are preferred, as they allow more than one failure to be reported in a test. However, you should use ASSERT_* if it doesn't make sense to continue when the assertion in question fails.

Is there a way to get EXPECT_*-like behavior in Python's unittest? If not in unittest, is there another Python unit testing framework that does support this behavior?


Incidentally, I was curious how many real tests might benefit from non-fatal assertions, so I looked at some code samples (edited 2014-08-19 to use searchcode instead of Google Code Search, RIP). Of 10 results picked at random from the first page, all contained tests that make multiple independent assertions in the same test method. All of them would benefit from non-fatal assertions.


It is considered an anti-pattern to have multiple asserts in a single unit test. A single unit test is expected to test only one thing. Perhaps you are testing too much. Consider splitting this test up into multiple tests. This way you can name each test properly.

Sometimes however, it is okay to check multiple things at the same time. For instance when you are asserting properties of the same object. In that case you are in fact asserting whether that object is correct. A way to do this is to write a custom helper method that knows how to assert on that object. You can write that method in such a way that it shows all failing properties or for instance shows the complete state of the expected object and the complete state of the actual object when an assert fails.
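As a rough sketch of that idea (the mixin and its assertCarEqual name are my own illustration, not a standard unittest feature; the Car fields come from the question's example), such a helper might look like this:

class CarAssertions(object):
    """Mixin for a TestCase: checks every Car field and reports all
    mismatches together in a single failure message."""

    def assertCarEqual(self, car, make, model, has_seats, wheel_count):
        problems = []
        if car.make != make:
            problems.append("make: %r != %r" % (car.make, make))
        if car.model != model:
            problems.append("model: %r != %r" % (car.model, model))
        if car.has_seats != has_seats:
            problems.append("has_seats: %r != %r" % (car.has_seats, has_seats))
        if car.wheel_count != wheel_count:
            problems.append("wheel_count: %r != %r" % (car.wheel_count, wheel_count))
        if problems:
            self.fail("Car mismatch:\n  " + "\n  ".join(problems))

A test class that inherits from both CarAssertions and unittest.TestCase can then call self.assertCarEqual(car, "Ford", "Model T", True, 4) and see every failing field at once.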

I don't think there is a way to do this with PyUnit and wouldn't want to see PyUnit extended in this way.

I prefer to stick to one assertion per test function (or more specifically asserting one concept per test) and would rewrite test_addition() as four separate test functions. This would give more useful information on failure, viz:
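Here is a sketch of what that split might look like (the method names are my guesses, chosen to line up with the failure output below; the exact traceback line numbers will depend on your file):

import unittest


class MathTest(unittest.TestCase):
    def test_addition_with_identity(self):
        self.assertEqual(1 + 0, 1)

    def test_addition_with_two_positives(self):
        self.assertEqual(1 + 1, 3)  # Failure!

    def test_addition_with_two_negatives(self):
        self.assertEqual(-1 + (-1), -1)  # Failure!

    def test_addition_with_zero_sum(self):
        self.assertEqual(1 + (-1), 0)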

.FF.
======================================================================
FAIL: test_addition_with_two_negatives (__main__.MathTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_addition.py", line 10, in test_addition_with_two_negatives
    self.assertEqual(-1 + (-1), -1)
AssertionError: -2 != -1

======================================================================
FAIL: test_addition_with_two_positives (__main__.MathTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_addition.py", line 6, in test_addition_with_two_positives
    self.assertEqual(1 + 1, 3)  # Failure!
AssertionError: 2 != 3

----------------------------------------------------------------------
Ran 4 tests in 0.000s

FAILED (failures=2)

If you decide that this approach isn't for you, you may find this answer helpful.

Update

It looks like you are testing two concepts with your updated question, and I would split these into two unit tests. The first is that the parameters are stored on the creation of a new object. This would have two assertions, one for make and one for model. If the first fails, then that clearly needs to be fixed; whether the second passes or fails is irrelevant at this juncture.

The second concept is more questionable... You're testing whether some default values are initialised. Why? It would be more useful to test these values at the point that they are actually used (and if they are not used, then why are they there?).
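For concreteness, here is a sketch of that split (the setUp and method names are my guesses, chosen to line up with the failure output below):

import unittest


class CarTest(unittest.TestCase):
    def setUp(self):
        self.make = "Ford"
        self.model = "Model T"
        self.car = Car(make=self.make, model=self.model)

    def test_creation_parameters(self):
        self.assertEqual(self.car.make, self.make)
        self.assertEqual(self.car.model, self.model)  # Failure!

    def test_creation_defaults(self):
        self.assertTrue(self.car.has_seats)
        self.assertEqual(self.car.wheel_count, 4)  # Failure!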

Both of these tests fail, and both should. When I am unit-testing, I am far more interested in failure than I am in success as that is where I need to concentrate.

FF
======================================================================
FAIL: test_creation_defaults (__main__.CarTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_car.py", line 25, in test_creation_defaults
    self.assertEqual(self.car.wheel_count, 4)  # Failure!
AssertionError: 3 != 4

======================================================================
FAIL: test_creation_parameters (__main__.CarTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_car.py", line 20, in test_creation_parameters
    self.assertEqual(self.car.model, self.model)  # Failure!
AssertionError: 'Ford' != 'Model T'

----------------------------------------------------------------------
Ran 2 tests in 0.000s

FAILED (failures=2)

Do each assert in a separate method.

import unittest


class MathTest(unittest.TestCase):
    def test_addition1(self):
        self.assertEqual(1 + 0, 1)

    def test_addition2(self):
        self.assertEqual(1 + 1, 3)  # Failure!

    def test_addition3(self):
        self.assertEqual(1 + (-1), 0)

    def test_addition4(self):
        self.assertEqual(-1 + (-1), -1)  # Failure!

What you'll probably want to do is derive from unittest.TestCase, since that's the class that throws when an assertion fails. You will have to re-architect your TestCase so that it doesn't throw (perhaps keeping a list of failures instead). Re-architecting can cause other issues that you would have to resolve; for example, you may end up needing to derive from TestSuite to support the changes made to your TestCase.

Another way to have non-fatal assertions is to capture the assertion exception and store the exceptions in a list. Then assert that that list is empty as part of the tearDown.

import unittest


class Car(object):
    def __init__(self, make, model):
        self.make = make
        self.model = make  # Copy and paste error: should be model.
        self.has_seats = True
        self.wheel_count = 3  # Typo: should be 4.


class CarTest(unittest.TestCase):
    def setUp(self):
        self.verificationErrors = []

    def tearDown(self):
        self.assertEqual([], self.verificationErrors)

    def test_init(self):
        make = "Ford"
        model = "Model T"
        car = Car(make=make, model=model)
        try: self.assertEqual(car.make, make)
        except AssertionError as e: self.verificationErrors.append(str(e))
        try: self.assertEqual(car.model, model)  # Failure!
        except AssertionError as e: self.verificationErrors.append(str(e))
        try: self.assertTrue(car.has_seats)
        except AssertionError as e: self.verificationErrors.append(str(e))
        try: self.assertEqual(car.wheel_count, 4)  # Failure!
        except AssertionError as e: self.verificationErrors.append(str(e))


if __name__ == "__main__":
    unittest.main()

One option is to assert on all the values at once, as a tuple.

For example:

class CarTest(unittest.TestCase):
    def test_init(self):
        make = "Ford"
        model = "Model T"
        car = Car(make=make, model=model)
        self.assertEqual(
            (car.make, car.model, car.has_seats, car.wheel_count),
            (make, model, True, 4))

The output from this test would be:

======================================================================
FAIL: test_init (test.CarTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\temp\py_mult_assert\test.py", line 17, in test_init
    (make, model, True, 4))
AssertionError: Tuples differ: ('Ford', 'Ford', True, 3) != ('Ford', 'Model T', True, 4)

First differing element 1:
Ford
Model T

- ('Ford', 'Ford', True, 3)
?           ^ -          ^

+ ('Ford', 'Model T', True, 4)
?           ^  ++++         ^

This shows that both the model and the wheel count are incorrect.

I liked the approach by @Anthony-Batchelor of capturing the AssertionError exception. Here is a slight variation on that approach using decorators, along with a way to report the test cases as pass/fail.

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import unittest


class UTReporter(object):
    '''
    The UTReporter class keeps track of the test cases
    that have been executed.
    '''
    def __init__(self):
        self.testcases = []
        print("init called")

    def add_testcase(self, testcase):
        self.testcases.append(testcase)

    def display_report(self):
        for tc in self.testcases:
            msg = "=============================" + "\n" + \
                  "Name: " + tc['name'] + "\n" + \
                  "Description: " + str(tc['description']) + "\n" + \
                  "Status: " + tc['status'] + "\n"
            print(msg)


reporter = UTReporter()


def assert_capture(*args, **kwargs):
    '''
    The decorator defines the override behavior.
    Unit test functions decorated with this decorator ignore
    the unittest AssertionError. Instead, they log the test case
    to the UTReporter.
    '''
    def assert_decorator(func):
        def inner(*args, **kwargs):
            tc = {}
            tc['name'] = func.__name__
            tc['description'] = func.__doc__
            try:
                func(*args, **kwargs)
                tc['status'] = 'pass'
            except AssertionError:
                tc['status'] = 'fail'
            reporter.add_testcase(tc)
        return inner
    return assert_decorator


class DecorateUt(unittest.TestCase):

    @assert_capture()
    def test_basic(self):
        x = 5
        self.assertEqual(x, 4)

    @assert_capture()
    def test_basic_2(self):
        x = 4
        self.assertEqual(x, 4)


def main():
    suite = unittest.TestLoader().loadTestsFromTestCase(DecorateUt)
    unittest.TextTestRunner(verbosity=2).run(suite)

    reporter.display_report()


if __name__ == '__main__':
    main()

Output from console:

(awsenv)$ ./decorators.py
init called
test_basic (__main__.DecorateUt) ... ok
test_basic_2 (__main__.DecorateUt) ... ok


----------------------------------------------------------------------
Ran 2 tests in 0.000s


OK
=============================
Name: test_basic
Description: None
Status: fail


=============================
Name: test_basic_2
Description: None
Status: pass

expect is very useful in gtest. Here is a Python equivalent (also published as a gist):

import sys
import unittest


class TestCase(unittest.TestCase):
    def run(self, result=None):
        # Keep a reference to the result object so the expect*
        # methods can record non-fatal failures against it.
        if result is None:
            self.result = self.defaultTestResult()
        else:
            self.result = result

        return unittest.TestCase.run(self, result)

    def expect(self, val, msg=None):
        '''
        Like TestCase.assertTrue, but doesn't halt the test.
        '''
        try:
            self.assertTrue(val, msg)
        except AssertionError:
            self.result.addFailure(self, sys.exc_info())

    def expectEqual(self, first, second, msg=None):
        try:
            self.assertEqual(first, second, msg)
        except AssertionError:
            self.result.addFailure(self, sys.exc_info())

    expect_equal = expectEqual

    assert_equal = unittest.TestCase.assertEqual
    assert_raises = unittest.TestCase.assertRaises


test_main = unittest.main

There is a soft assertion package on PyPI called softest that will handle your requirements. It works by collecting the failures, combining exception and stack trace data, and reporting it all as part of the usual unittest output.

For instance, this code:

import softest


class ExampleTest(softest.TestCase):
    def test_example(self):
        # Be sure to pass the assert method object, not a call to it.
        self.soft_assert(self.assertEqual, 'Worf', 'wharf', 'Klingon is not ship receptacle')
        # self.soft_assert(self.assertEqual('Worf', 'wharf', 'Klingon is not ship receptacle'))  # will not work as desired
        self.soft_assert(self.assertTrue, True)
        self.soft_assert(self.assertTrue, False)

        self.assert_all()


if __name__ == '__main__':
    softest.main()

...produces this console output:

======================================================================
FAIL: "test_example" (ExampleTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\...\softest_test.py", line 14, in test_example
    self.assert_all()
  File "C:\...\softest\case.py", line 138, in assert_all
    self.fail(''.join(failure_output))
AssertionError: ++++ soft assert failure details follow below ++++

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The following 2 failures were found in "test_example" (ExampleTest):
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Failure 1 ("test_example" method)
+--------------------------------------------------------------------+
Traceback (most recent call last):
  File "C:\...\softest_test.py", line 10, in test_example
    self.soft_assert(self.assertEqual, 'Worf', 'wharf', 'Klingon is not ship receptacle')
  File "C:\...\softest\case.py", line 84, in soft_assert
    assert_method(*arguments, **keywords)
  File "C:\...\Python\Python36-32\lib\unittest\case.py", line 829, in assertEqual
    assertion_func(first, second, msg=msg)
  File "C:\...\Python\Python36-32\lib\unittest\case.py", line 1203, in assertMultiLineEqual
    self.fail(self._formatMessage(msg, standardMsg))
  File "C:\...\Python\Python36-32\lib\unittest\case.py", line 670, in fail
    raise self.failureException(msg)
AssertionError: 'Worf' != 'wharf'
- Worf
+ wharf
: Klingon is not ship receptacle

+--------------------------------------------------------------------+
Failure 2 ("test_example" method)
+--------------------------------------------------------------------+
Traceback (most recent call last):
  File "C:\...\softest_test.py", line 12, in test_example
    self.soft_assert(self.assertTrue, False)
  File "C:\...\softest\case.py", line 84, in soft_assert
    assert_method(*arguments, **keywords)
  File "C:\...\Python\Python36-32\lib\unittest\case.py", line 682, in assertTrue
    raise self.failureException(msg)
AssertionError: False is not true

----------------------------------------------------------------------
Ran 1 test in 0.000s

FAILED (failures=1)

NOTE: I created and maintain softest.

I had a problem with the answer from @Anthony Batchelor because it would have forced me to use try...except inside my unit tests. Instead, I encapsulated the try...except logic in an override of the TestCase.assertEqual method. Here is the code:

import unittest
import traceback


class AssertionErrorData(object):

    def __init__(self, stacktrace, message):
        super(AssertionErrorData, self).__init__()
        self.stacktrace = stacktrace
        self.message = message


class MultipleAssertionFailures(unittest.TestCase):

    def __init__(self, *args, **kwargs):
        self.verificationErrors = []
        super(MultipleAssertionFailures, self).__init__(*args, **kwargs)

    def tearDown(self):
        super(MultipleAssertionFailures, self).tearDown()

        if self.verificationErrors:
            index = 0
            errors = []

            for error in self.verificationErrors:
                index += 1
                errors.append("%s\nAssertionError %s: %s" % (
                    error.stacktrace, index, error.message))

            # Clear before failing, because self.fail() raises and
            # would otherwise make the cleanup unreachable.
            self.verificationErrors.clear()
            self.fail('\n\n' + "\n".join(errors))

    def assertEqual(self, goal, results, msg=None):
        try:
            super(MultipleAssertionFailures, self).assertEqual(goal, results, msg)
        except unittest.TestCase.failureException as error:
            goodtraces = self._goodStackTraces()
            self.verificationErrors.append(
                AssertionErrorData("\n".join(goodtraces[:-2]), error))

    def _goodStackTraces(self):
        """
        Get only the relevant part of the stacktrace.
        """
        stop = False
        found = False
        goodtraces = []

        stacktrace = traceback.extract_stack()

        # https://stackoverflow.com/questions/54499367/how-to-correctly-override-testcase
        for stack in stacktrace:
            filename = stack.filename

            if found and not stop and \
                    not filename.find('lib') < filename.find('unittest'):
                stop = True

            if not found and filename.find('lib') < filename.find('unittest'):
                found = True

            if stop and found:
                stackline = '  File "%s", line %s, in %s\n    %s' % (
                    stack.filename, stack.lineno, stack.name, stack.line)
                goodtraces.append(stackline)

        return goodtraces


# class DummyTestCase(unittest.TestCase):
class DummyTestCase(MultipleAssertionFailures):

    def setUp(self):
        self.maxDiff = None
        super(DummyTestCase, self).setUp()

    def tearDown(self):
        super(DummyTestCase, self).tearDown()

    def test_function_name(self):
        self.assertEqual("var", "bar")
        self.assertEqual("1937", "511")


if __name__ == '__main__':
    unittest.main()

Result output:

F
======================================================================
FAIL: test_function_name (__main__.DummyTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\User\Downloads\test.py", line 77, in tearDown
super(DummyTestCase, self).tearDown()
File "D:\User\Downloads\test.py", line 29, in tearDown
self.fail( '\n\n' + "\n\n".join( errors ) )
AssertionError:


File "D:\User\Downloads\test.py", line 80, in test_function_name
self.assertEqual( "var", "bar" )
AssertionError 1: 'var' != 'bar'
- var
? ^
+ bar
? ^
:


File "D:\User\Downloads\test.py", line 81, in test_function_name
self.assertEqual( "1937", "511" )
AssertionError 2: '1937' != '511'
- 1937
+ 511
:

More alternative solutions for capturing the correct stacktrace can be found at How to correctly override TestCase.assertEqual(), producing the right stacktrace?

I realize this question was asked literally years ago, but there are now (at least) two Python packages that allow you to do this.

One is softest: https://pypi.org/project/softest/

The other is Python-Delayed-Assert: https://github.com/pr4bh4sh/python-delayed-assert

I haven't used either, but they look pretty similar to me.
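For Python-Delayed-Assert, the project README shows usage roughly like the sketch below. I am going from the README here, so treat the expect/assert_expectations names as assumptions and check the repository before relying on them; Car is the class from the question's example.

import unittest

# Names taken from the python-delayed-assert README; verify against the repo.
from delayed_assert import expect, assert_expectations


class CarTest(unittest.TestCase):
    def test_init(self):
        car = Car(make="Ford", model="Model T")
        expect(car.make == "Ford")       # recorded, does not raise
        expect(car.model == "Model T")   # recorded, does not raise
        assert_expectations()            # fails now if any expectation failed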

Since Python 3.4 you can also use subtests:

def test_init(self):
    make = "Ford"
    model = "Model T"
    car = Car(make=make, model=model)
    with self.subTest(msg='Car.make check'):
        self.assertEqual(car.make, make)
    with self.subTest(msg='Car.model check'):
        self.assertEqual(car.model, model)
    with self.subTest(msg='Car.has_seats check'):
        self.assertTrue(car.has_seats)
    with self.subTest(msg='Car.wheel_count check'):
        self.assertEqual(car.wheel_count, 4)

(The msg parameter makes it easier to determine which check failed.)

Output:

======================================================================
FAIL: test_init (__main__.CarTest) [Car.model check]
----------------------------------------------------------------------
Traceback (most recent call last):
File "test.py", line 23, in test_init
self.assertEqual(car.model, model)
AssertionError: 'Ford' != 'Model T'
- Ford
+ Model T




======================================================================
FAIL: test_init (__main__.CarTest) [Car.wheel_count check]
----------------------------------------------------------------------
Traceback (most recent call last):
File "test.py", line 27, in test_init
self.assertEqual(car.wheel_count, 4)
AssertionError: 3 != 4


----------------------------------------------------------------------
Ran 1 test in 0.001s


FAILED (failures=2)
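As an aside, subTest also accepts arbitrary keyword parameters, which are echoed in the failure header; that is handy when looping over cases. A minimal sketch (my own example, not from the answer above):

import unittest


class MathTest(unittest.TestCase):
    def test_addition(self):
        cases = [(1, 0, 1), (1, 1, 2), (1, -1, 0), (-1, -1, -2)]
        for a, b, expected in cases:
            # Each failing case is reported separately with its
            # parameters shown in the failure header, and the loop
            # keeps running after a failure.
            with self.subTest(a=a, b=b):
                self.assertEqual(a + b, expected)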

I think I found a solution that works. Using Selenium, I collected the text of a list of elements, looped through them until I found one containing the text I needed, set a flag and broke out of the loop when it was found, and then asserted on that flag outside the loop.

elements = self.driver.find_elements(*element)
time_strip = combined_time[:-2]  # test-case-specific code

# Scan the elements for one whose text contains the target string.
found = False
for element in elements:
    if time_strip in element.text:  # test-case-specific check
        found = True
        break

# Assert on the flag outside the loop.
assert found, "no element text contained %r" % time_strip