I have some test data and want to create a unit test for each item. My first idea was to do it like this:
import unittest

l = [["foo", "a", "a"], ["bar", "a", "b"], ["lee", "b", "b"]]

class TestSequence(unittest.TestCase):
    def testsample(self):
        for name, a, b in l:
            print("test", name)
            self.assertEqual(a, b)

if __name__ == '__main__':
    unittest.main()
The downside of this is that it handles all the data in a single test. I would rather generate one test for each item on the fly. Any suggestions?
Use the ddt library. It adds a simple decorator for test methods:
import unittest
from ddt import ddt, data
from mycode import larger_than_two

@ddt
class FooTestCase(unittest.TestCase):
    @data(3, 4, 12, 23)
    def test_larger_than_two(self, value):
        self.assertTrue(larger_than_two(value))

    @data(1, -3, 2, 0)
    def test_not_larger_than_two(self, value):
        self.assertFalse(larger_than_two(value))
The library can be installed with pip. It does not require nose and works excellently with the standard library's unittest module.
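Applied to the question's (name, a, b) triples, a minimal sketch using ddt's @data together with its @unpack decorator (which splits each tuple into separate arguments) could look like this; ddt generates a distinctly named test method per tuple, so each failing item is reported individually:

import unittest
from ddt import ddt, data, unpack

@ddt
class TestSequence(unittest.TestCase):
    # Each tuple becomes its own generated test method;
    # @unpack spreads the tuple into the name/a/b parameters.
    @data(("foo", "a", "a"), ("bar", "a", "b"), ("lee", "b", "b"))
    @unpack
    def test_sample(self, name, a, b):
        self.assertEqual(a, b, "mismatch for {}".format(name))

if __name__ == '__main__':
    unittest.main()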
The other day, while looking through radon's source code, I came across ParamUnittest (there are usage examples in its GitHub repository). It should work with other frameworks that extend TestCase (such as nose).
Here is an example:
import unittest
import paramunittest

@paramunittest.parametrized(
    ('1', '2'),
    # (4, 3),  # <---- Uncomment to have a failing test
    ('2', '3'),
    (('4', ), {'b': '5'}),
    ((), {'a': 5, 'b': 6}),
    {'a': 5, 'b': 6},
)
class TestBar(unittest.TestCase):
    def setParameters(self, a, b):
        self.a = a
        self.b = b

    def testLess(self):
        self.assertLess(self.a, self.b)
I was having trouble with a very particular style of parameterized tests. All of our Selenium tests can run locally, but they should also be able to run remotely against multiple platforms on SauceLabs. Basically, I wanted to take a large number of already-written test cases and parameterize them with as few code changes as possible. Furthermore, I needed to be able to pass parameters into the setUp method, something I hadn't seen done in any other solution.
Here is what I came up with:
import inspect
import types

test_platforms = [
    {'browserName': "internet explorer", 'platform': "Windows 7", 'version': "10.0"},
    {'browserName': "internet explorer", 'platform': "Windows 7", 'version': "11.0"},
    {'browserName': "firefox", 'platform': "Linux", 'version': "43.0"},
]

def sauce_labs():
    def wrapper(cls):
        return test_on_platforms(cls)
    return wrapper

def test_on_platforms(base_class):
    # For every test_* method, generate one copy per platform, attach the
    # platform dict to the new function, then remove the original method.
    for name, function in inspect.getmembers(base_class, inspect.isfunction):
        if name.startswith('test_'):
            for platform in test_platforms:
                browser = ''.join(platform['browserName'].title().split())
                new_name = '_'.join([name, browser, platform['version']])
                new_function = types.FunctionType(function.__code__, function.__globals__,
                                                  new_name, function.__defaults__,
                                                  function.__closure__)
                setattr(new_function, 'platform', platform)
                setattr(base_class, new_name, new_function)
            delattr(base_class, name)
    return base_class
With this, all I had to do was add the @sauce_labs() decorator to each regular old TestCase. When the tests run, they are wrapped up and rewritten so that all the test methods are parameterized and renamed: LoginTests.test_login(self) runs as LoginTests.test_login_InternetExplorer_10.0(self), LoginTests.test_login_InternetExplorer_11.0(self), and LoginTests.test_login_Firefox_43.0(self). Each generated test carries its own platform dict to decide which browser/platform to run against, which is available even in LoginTests.setUp, crucial for my task since that's where the connection to SauceLabs is initialized.
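As a usage sketch (LoginTests and the setUp lookup below are my illustration, not the author's exact code, and assume the decorator above is in scope): since the decorator stores the platform dict as an attribute on each generated test function, one way to reach it from setUp is through unittest's _testMethodName, which names the currently running test method:

import unittest

@sauce_labs()
class LoginTests(unittest.TestCase):
    def setUp(self):
        # Hypothetical lookup: fetch the platform dict the decorator
        # attached to the currently running (generated) test method.
        self.platform = getattr(self, self._testMethodName).platform
        # ... initialize the SauceLabs connection from self.platform here ...

    def test_login(self):
        # Rewritten by the decorator into one test per entry in test_platforms.
        self.assertIn('browserName', self.platform)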
Anyway, I hope this helps anyone who wants to do similar "global" parameterization of their tests!
Using unittest (since 3.4)
Since Python 3.4, the standard library unittest package has the subTest context manager.
See the documentation:
26.4.7. Distinguishing test iterations using subtests
subTest
Example:
from unittest import TestCase

param_list = [('a', 'a'), ('a', 'b'), ('b', 'b')]

class TestDemonstrateSubtest(TestCase):
    def test_works_as_expected(self):
        for p1, p2 in param_list:
            with self.subTest():
                self.assertEqual(p1, p2)
You can also pass a custom message and parameter values to subTest():
with self.subTest(msg="Checking if p1 equals p2", p1=p1, p2=p2):
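A failing subtest does not abort the loop: the remaining iterations still run, and each failure is reported separately, annotated with the message and parameters you passed. The failure header looks roughly like this (exact formatting varies by Python version):

======================================================================
FAIL: test_works_as_expected (__main__.TestDemonstrateSubtest) [Checking if p1 equals p2] (p1='a', p2='b')
----------------------------------------------------------------------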
Using nose
The nose testing framework supports this.
Example (the code below is the entire contents of the file containing the tests):
param_list = [('a', 'a'), ('a', 'b'), ('b', 'b')]

def test_generator():
    for params in param_list:
        yield check_em, params[0], params[1]

def check_em(a, b):
    assert a == b
The output of the nosetests command looks like this:
> nosetests -v
testgen.test_generator('a', 'a') ... ok
testgen.test_generator('a', 'b') ... FAIL
testgen.test_generator('b', 'b') ... ok
======================================================================
FAIL: testgen.test_generator('a', 'b')
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.5/site-packages/nose-0.10.1-py2.5.egg/nose/case.py", line 203, in runTest
self.test(*self.arg)
File "testgen.py", line 7, in check_em
assert a == b
AssertionError
----------------------------------------------------------------------
Ran 3 tests in 0.006s
FAILED (failures=1)