So far I've avoided the nightmare of testing multithreaded code, since it just seems like too much of a minefield. I'd like to ask how people have gone about testing code that relies on threads for successful execution, or how people have gone about testing the kinds of problems that only show up when two threads interact in a given way?

This seems like a really key problem for programmers today, and it would be useful, IMHO, to pool our knowledge on it.


Current answer

I've encountered this question many times in recent years while writing thread-handling code for several projects. I'm providing a late answer because most of the other answers, while offering alternatives, don't actually answer the question about testing. My answer is addressed to the cases where there is no alternative to multithreaded code; I cover code design issues for completeness, but I also discuss unit testing.

Writing testable multithreaded code

The first thing to do is to separate your production thread-handling code from all the code that does actual data processing. That way, the data processing can be tested as single-threaded code, and the only thing the multithreaded code does is coordinate threads.
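
As a minimal sketch of that separation (the class and method names here are invented for illustration, not taken from the answer): the data processing is a plain single-threaded object, and the thread-handling wrapper only moves work to and from it:

import java.util.concurrent.BlockingQueue;

// Pure data processing: no threads, no locks; testable with ordinary
// single-threaded unit tests.
class RecordProcessor {
    String process(String record) {
        return record.trim().toLowerCase();
    }
}

// Thin thread-handling wrapper: all it does is coordinate threads
// around the processor, so it stays only a few lines long.
class ProcessingWorker implements Runnable {
    private final RecordProcessor processor = new RecordProcessor();
    private final BlockingQueue<String> input;
    private final BlockingQueue<String> output;

    ProcessingWorker(BlockingQueue<String> input, BlockingQueue<String> output) {
        this.input = input;
        this.output = output;
    }

    public void run() {
        try {
            while (true)
                output.put(processor.process(input.take()));
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // allow clean shutdown
        }
    }
}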

The second thing to remember is that bugs in multithreaded code are probabilistic; the bugs that manifest themselves least frequently are the bugs that will sneak through into production, will be difficult to reproduce even in production, and will thus cause the biggest problems. For this reason, the standard coding approach of writing the code quickly and then debugging it until it works is a bad idea for multithreaded code; it will result in code where the easy bugs are fixed and the dangerous bugs are still there.

Instead, multithreaded code must be written with the attitude of avoiding bugs in the first place. If you have properly factored out the data processing code, the thread-handling code should be small enough - preferably a few lines, at worst a few dozen - that you have a chance of writing it without introducing a bug, and certainly without introducing many, if you understand threading, take your time, and are careful.

Writing unit tests for multithreaded code

Once the multithreaded code has been written as carefully as possible, it is still worth writing tests for it. The primary purpose of the tests is not so much to test for highly timing-dependent race condition bugs - it's impossible to test for such race conditions repeatably - as to test that your locking strategy for preventing such bugs allows multiple threads to interact as intended.

To properly test correct locking behavior, a test must start multiple threads. To make the test repeatable, we want the interactions between the threads to happen in a predictable order. We don't want to externally synchronize the threads in the test, because that will mask bugs that could happen in production where the threads are not externally synchronized. That leaves the use of timing delays for thread synchronization, which is the technique that I have used successfully whenever I've had to write tests of multithreaded code.

If the delays are too short, then the test becomes fragile, because minor timing differences - say between different machines on which the tests may be run - may cause the timing to be off and the test to fail. What I've typically done is start with delays that cause test failures, increase the delays so that the test passes reliably on my development machine, and then double the delays beyond that so the test has a good chance of passing on other machines. This does mean that the test will take a macroscopic amount of time, though in my experience, careful test design can limit that time to no more than a dozen seconds. Since you shouldn't have very many places requiring thread coordination code in your application, that should be acceptable for your test suite.
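
A minimal sketch of this delay-based technique, assuming a hypothetical SafeCounter as the class under test (the delay values here are the kind you would tune and then double, as described above):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CounterLockingTest {

    // Hypothetical class under test: a counter guarded by its own lock.
    static class SafeCounter {
        private int value;
        synchronized void increment() { value++; }
        synchronized int get() { return value; }
    }

    @Test(timeout = 60 * 1000)
    public void incrementsFromTwoThreadsAreNotLost() throws Exception {
        final SafeCounter counter = new SafeCounter();

        // Stagger the threads with delays so their critical sections
        // overlap in a repeatable order, without synchronizing them
        // externally.
        Thread first = new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 100; i++) {
                    counter.increment();
                    sleepQuietly(2);
                }
            }
        });
        Thread second = new Thread(new Runnable() {
            public void run() {
                sleepQuietly(1); // offset so the iterations interleave
                for (int i = 0; i < 100; i++) {
                    counter.increment();
                    sleepQuietly(2);
                }
            }
        });

        first.start();
        second.start();
        first.join();
        second.join();

        // A broken locking strategy shows up here as lost updates.
        assertEquals(200, counter.get());
    }

    private static void sleepQuietly(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}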

Finally, keep track of the number of bugs caught by your test. If your test has 80% code coverage, it can be expected to catch about 80% of your bugs. If your test is well designed but finds no bugs, there's a reasonable chance that you don't have additional bugs that will only show up in production. If the test catches one or two bugs, you might still get lucky. Beyond that, you may want to consider a careful review, or even a complete rewrite, of your thread handling code, since it is likely that the code still contains hidden bugs that will be very difficult to find until the code is in production, and very difficult to fix then.

Other answers

I've done a lot of this, and yes, it's awful.

Some tips:

- Use GroboUtils to run multiple test threads.
- Use alphaWorks ConTest to instrument classes so that the interleavings vary between iterations.
- Create a throwable field and check it in tearDown (see Listing 1). If you catch a bad exception in another thread, just assign it to throwable.
- I created the utils class in Listing 2 and have found it invaluable, especially waitForVerify and waitForCondition, which will greatly increase the performance of your tests.
- Make good use of AtomicBoolean in your tests. It is thread safe, and you'll often need a final reference type to store values from callback classes and suchlike. See the example in Listing 3.
- Make sure to always give your test a timeout (e.g., @Test(timeout=60*1000)), as concurrency tests can sometimes hang forever when they're broken.

Listing 1:

// The field assigned by the worker thread (see the usage sketch below).
private volatile Throwable throwable;

@After
public void tearDown() throws Throwable {
    if ( throwable != null )
        throw throwable;
}
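
A usage sketch of that pattern (a hypothetical test body, not from the original answer): the worker thread catches anything thrown, stores it in the field, and tearDown rethrows it on the test thread:

@Test(timeout = 60 * 1000)
public void workIsDoneOnAnotherThread() throws Exception {
    Thread worker = new Thread(new Runnable() {
        public void run() {
            try {
                // assertions or calls that may fail on this thread
            } catch (Throwable t) {
                throwable = t; // rethrown by tearDown()
            }
        }
    });
    worker.start();
    worker.join();
}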

Listing 2:

import static org.junit.Assert.fail;
import java.io.File;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Random;
import org.apache.commons.collections.Closure;
import org.apache.commons.collections.Predicate;
import org.apache.commons.lang.time.StopWatch;
import org.easymock.EasyMock;
import static org.easymock.classextension.EasyMock.*;

import ca.digitalrapids.io.DRFileUtils;

/**
 * Various utilities for testing
 */
public abstract class DRTestUtils
{
    static private Random random = new Random();

    /** Calls {@link #waitForCondition(Integer, Integer, Predicate, String)} with
     * default max wait and check period values.
     */
    static public void waitForCondition(Predicate predicate, String errorMessage)
        throws Throwable
    {
        waitForCondition(null, null, predicate, errorMessage);
    }

    /** Blocks until a condition is true, throwing an {@link AssertionError} if
     * it does not become true during a given max time.
     * @param maxWait_ms max time to wait for a true condition. Optional; defaults
     * to 30 * 1000 ms (30 seconds).
     * @param checkPeriod_ms period at which to try the condition. Optional; defaults
     * to 100 ms.
     * @param predicate the condition
     * @param errorMessage message used in the {@link AssertionError}
     * @throws Throwable on {@link AssertionError} or any other exception/error
     */
    static public void waitForCondition(Integer maxWait_ms, Integer checkPeriod_ms,
        Predicate predicate, String errorMessage) throws Throwable
    {
        waitForCondition(maxWait_ms, checkPeriod_ms, predicate, new Closure() {
            public void execute(Object errorMessage)
            {
                fail((String)errorMessage);
            }
        }, errorMessage);
    }

    /** Blocks until a condition is true, running a closure if
     * it does not become true during a given max time.
     * @param maxWait_ms max time to wait for a true condition. Optional; defaults
     * to 30 * 1000 ms (30 seconds).
     * @param checkPeriod_ms period at which to try the condition. Optional; defaults
     * to 100 ms.
     * @param predicate the condition
     * @param closure closure to run on timeout
     * @param argument argument for the closure
     * @throws Throwable on {@link AssertionError} or any other exception/error
     */
    static public void waitForCondition(Integer maxWait_ms, Integer checkPeriod_ms,
        Predicate predicate, Closure closure, Object argument) throws Throwable
    {
        if ( maxWait_ms == null )
            maxWait_ms = 30 * 1000;
        if ( checkPeriod_ms == null )
            checkPeriod_ms = 100;
        StopWatch stopWatch = new StopWatch();
        stopWatch.start();
        while ( !predicate.evaluate(null) ) {
            Thread.sleep(checkPeriod_ms);
            if ( stopWatch.getTime() > maxWait_ms ) {
                closure.execute(argument);
            }
        }
    }

    /** Calls {@link #waitForVerify(Integer, Object)} with <code>null</code>
     * for {@code maxWait_ms}.
     */
    static public void waitForVerify(Object easyMockProxy)
        throws Throwable
    {
        waitForVerify(null, easyMockProxy);
    }

    /** Repeatedly calls {@link EasyMock#verify(Object[])} until it succeeds, or a
     * max wait time has elapsed.
     * @param maxWait_ms max wait time. <code>null</code> defaults to 30 s.
     * @param easyMockProxy proxy to call verify on
     * @throws Throwable on {@link AssertionError} or any other exception/error
     */
    static public void waitForVerify(Integer maxWait_ms, Object easyMockProxy)
        throws Throwable
    {
        if ( maxWait_ms == null )
            maxWait_ms = 30 * 1000;
        StopWatch stopWatch = new StopWatch();
        stopWatch.start();
        for(;;) {
            try
            {
                verify(easyMockProxy);
                break;
            }
            catch (AssertionError e)
            {
                if ( stopWatch.getTime() > maxWait_ms )
                    throw e;
                Thread.sleep(100);
            }
        }
    }

    /** Returns a path to a directory in the temp dir with the name of the given
     * class. This is useful for temporary test files.
     * @param object test class, or an instance of it, for which to create the dir
     * @return the path
     */
    static public String getTestDirPathForTestClass(Object object)
    {
        String filename = object instanceof Class ?
            ((Class)object).getName() :
            object.getClass().getName();
        return DRFileUtils.getTempDir() + File.separator +
            filename;
    }

    static public byte[] createRandomByteArray(int bytesLength)
    {
        byte[] sourceBytes = new byte[bytesLength];
        random.nextBytes(sourceBytes);
        return sourceBytes;
    }

    /** Returns <code>true</code> if the given object is an EasyMock mock object.
     */
    static public boolean isEasyMockMock(Object object) {
        try {
            InvocationHandler invocationHandler = Proxy
                    .getInvocationHandler(object);
            return invocationHandler.getClass().getName().contains("easymock");
        } catch (IllegalArgumentException e) {
            return false;
        }
    }
}
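
A quick usage sketch for waitForCondition (a hypothetical test fragment, assuming the enclosing test method declares throws Throwable, and that java.util.concurrent.atomic.AtomicBoolean and the commons-collections Predicate are imported):

final AtomicBoolean done = new AtomicBoolean(false);
new Thread(new Runnable() {
    public void run() {
        // ... the real work under test would happen here ...
        done.set(true);
    }
}).start();
// Polls every 100 ms and fails the test after 30 s by default.
DRTestUtils.waitForCondition(new Predicate() {
    public boolean evaluate(Object ignored) {
        return done.get();
    }
}, "worker never set the flag");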

Listing 3:

@Test
public void testSomething() {
    final AtomicBoolean called = new AtomicBoolean(false);
    subject.setCallback(new SomeCallback() {
        public void callback(Object arg) {
            // check arg here
            called.set(true);
        }
    });
    subject.run();
    assertTrue(called.get());
}

I spent most of last week at a university library studying the debugging of concurrent code. The central problem is that concurrent code is non-deterministic. Typically, academic debugging has fallen into one of three camps:

1. Event-trace/replay. This requires an event monitor, and then reviewing the events that were sent. In a UT framework, this would involve manually sending the events as part of a test and then doing post-mortem reviews.
2. Scriptable. This is where you interact with the running code with a set of triggers: "on x > foo, baz()". This could be interpreted into a UT framework where a run-time system triggers a given test on a certain condition (see the toy sketch after this list).
3. Interactive. This obviously won't work in an automatic testing situation. ;)
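
As a toy sketch of the "scriptable" idea only (nothing here comes from a real framework): a watcher thread polls a condition and fires an action when it first holds, which is roughly "on x > foo, baz()":

// A condition/action pair, e.g. "x > foo" and "baz()".
interface Trigger {
    boolean condition();
    void action();
}

class TriggerWatcher {
    // Polls the condition from a background thread and runs the action
    // once, the first time the condition evaluates to true.
    static Thread watch(final Trigger trigger) {
        Thread watcher = new Thread(new Runnable() {
            public void run() {
                while (!trigger.condition()) {
                    try {
                        Thread.sleep(10);
                    } catch (InterruptedException e) {
                        return; // watcher cancelled
                    }
                }
                trigger.action();
            }
        });
        watcher.setDaemon(true);
        watcher.start();
        return watcher;
    }
}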

Now, as the commenters above have noted, you can design your concurrent system to be more deterministic. However, if you don't do it properly, you're back to designing a sequential system.

My suggestion would be to focus on having a very strict design protocol about what gets threaded and what doesn't. If you constrain your interfaces so that there are minimal dependencies between elements, it is much easier.

Good luck, and keep working on the problem.

Another way to test threaded code, and very complex systems in general, is fuzz testing. It's not great and it won't find everything, but it can be useful and it's simple to do.

Quote:

Fuzz testing or fuzzing is a software testing technique that provides random data ("fuzz") to the inputs of a program. If the program fails (for example, by crashing, or by failing built-in code assertions), the defects can be noted. The great advantage of fuzz testing is that the test design is extremely simple, and free of preconceptions about system behavior. ... Fuzz testing is often used in large software development projects that employ black box testing. These projects usually have a budget to develop test tools, and fuzz testing is one of the techniques which offers a high benefit to cost ratio. ... However, fuzz testing is not a substitute for exhaustive testing or formal methods: it can only provide a random sample of the system's behavior, and in many cases passing a fuzz test may only demonstrate that a piece of software handles exceptions without crashing, rather than behaving correctly. Thus, fuzz testing can only be regarded as a bug-finding tool rather than an assurance of quality.
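
As a bare-bones sketch of what fuzzing threaded code can look like (the Counter here is just a stand-in for the real component under test; a fixed seed keeps failing runs replayable):

import java.util.Random;
import java.util.concurrent.atomic.AtomicInteger;

public class CounterFuzz {

    // Stand-in for the real component under test.
    static class Counter {
        private final AtomicInteger value = new AtomicInteger();
        void add(int n) { value.addAndGet(n); }
        int get() { return value.get(); }
    }

    public static void main(String[] args) throws InterruptedException {
        Random random = new Random(42); // fixed seed: failures are replayable
        for (int iteration = 0; iteration < 1000; iteration++) {
            final Counter counter = new Counter();
            int threadCount = 2 + random.nextInt(6);           // random thread mix
            final int opsPerThread = 1 + random.nextInt(1000); // random workload
            Thread[] threads = new Thread[threadCount];
            for (int t = 0; t < threadCount; t++) {
                threads[t] = new Thread(new Runnable() {
                    public void run() {
                        for (int i = 0; i < opsPerThread; i++)
                            counter.add(1);
                    }
                });
                threads[t].start();
            }
            for (Thread thread : threads)
                thread.join();
            // The invariant: no updates lost, none duplicated.
            int expected = threadCount * opsPerThread;
            if (counter.get() != expected)
                throw new AssertionError("iteration " + iteration + ": expected "
                        + expected + " but got " + counter.get());
        }
    }
}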

For Java, see chapter 12 of JCIP. There are some concrete examples of writing deterministic, multithreaded unit tests to test at least the correctness and invariants of concurrent code.
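
In the spirit of those JCIP chapter 12 examples (a from-memory sketch, not a reproduction of the book's code): producers and consumers rendezvous on a barrier so every run exercises real concurrency, and the checksums of what was put and what was taken must match afterwards:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

public class PutTakeSketch {
    public static void main(String[] args) throws Exception {
        final BlockingQueue<Integer> queue = new LinkedBlockingQueue<Integer>(10);
        final int pairs = 4;
        final int perThread = 1000;
        final AtomicLong putSum = new AtomicLong();
        final AtomicLong takeSum = new AtomicLong();
        // All producers, all consumers, and the main thread meet here,
        // so no thread starts (or finishes) before the others are ready.
        final CyclicBarrier barrier = new CyclicBarrier(pairs * 2 + 1);

        for (int i = 0; i < pairs; i++) {
            new Thread(new Runnable() { // producer
                public void run() {
                    try {
                        barrier.await();
                        for (int j = 0; j < perThread; j++) {
                            queue.put(j);
                            putSum.addAndGet(j);
                        }
                        barrier.await();
                    } catch (Exception e) { throw new RuntimeException(e); }
                }
            }).start();
            new Thread(new Runnable() { // consumer
                public void run() {
                    try {
                        barrier.await();
                        for (int j = 0; j < perThread; j++)
                            takeSum.addAndGet(queue.take());
                        barrier.await();
                    } catch (Exception e) { throw new RuntimeException(e); }
                }
            }).start();
        }

        barrier.await(); // release all threads together
        barrier.await(); // wait for all of them to finish
        // The invariant: everything put was taken exactly once.
        if (putSum.get() != takeSum.get())
            throw new AssertionError("lost or duplicated elements");
    }
}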

"Proving" thread safety with unit tests is much dicier. I believe you are better served by automated integration testing on a variety of platforms/configurations.

I once had the unfortunate task of testing threaded code, and those were definitely the hardest tests I've ever written.

When writing my tests, I used a combination of delegates and events. Basically it was all about using PropertyNotifyChanged events with a WaitCallback or some kind of polling ConditionalWaiter.

I'm not sure if that was the best approach, but it has worked for me.
我不确定这是否是最好的方法,但它对我来说是有效的。