Best practices for test-driven development using C# and RhinoMocks

To help my team write testable code, I came up with a simple list of best practices for making our C# code base more testable. (Some of the points refer to limitations of Rhino Mocks, a mocking framework for C#, but the rules may apply more generally as well.) Does anyone have best practices that they follow?

To maximize the testability of code, follow these rules:

  1. Write the test first, then the code. Reason: This ensures that you write testable code and that every line of code gets tests written for it.

  2. Design classes using dependency injection. Reason: You cannot mock or test what you cannot see. (A sketch illustrating rules 2, 5, and 7 follows this list.)

  3. Separate UI code from its behavior using Model-View-Controller or Model-View-Presenter. Reason: Allows the business logic to be tested while minimizing the parts that cannot be tested (the UI).

  4. Do not write static methods or classes. Reason: Static methods are difficult or impossible to isolate, and Rhino Mocks is unable to mock them.

  5. Program off interfaces, not classes. Reason: Using interfaces clarifies the relationships between objects. An interface should define a service that an object needs from its environment. Also, interfaces can be easily mocked using Rhino Mocks and other mocking frameworks.

  6. Isolate external dependencies. Reason: Unresolved external dependencies cannot be tested.

  7. Mark as virtual the methods you intend to mock. Reason: Rhino Mocks is unable to mock non-virtual methods.
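
To make rules 2, 5, and 7 concrete, here is a minimal sketch; the IMailSender and OrderProcessor names are invented for illustration:

// The interface defines the service OrderProcessor needs from its
// environment (rule 5), and is trivial to mock with Rhino Mocks.
public interface IMailSender
{
    void Send(string to, string body);
}

public class OrderProcessor
{
    private readonly IMailSender _mailSender;

    // The dependency is injected (rule 2), so a test can hand in
    // a mock or stub instead of a real mail sender.
    public OrderProcessor(IMailSender mailSender)
    {
        _mailSender = mailSender;
    }

    // Marked virtual (rule 7) so Rhino Mocks can override it if a
    // test ever needs to mock OrderProcessor itself.
    public virtual void Process(string customerEmail)
    {
        _mailSender.Send(customerEmail, "Your order has shipped.");
    }
}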


Good list. One of the things that you might want to establish - and I can't give you much advice since I'm just starting to think about it myself - is when a class should go into a different library, namespace, or nested namespace. You might even want to figure out a list of libraries and namespaces beforehand and require that the team meet and decide before merging two or adding a new one.

Oh, just thought of something that I do that you might want to as well. I generally have a unit test library with a test-fixture-per-class policy, where each test goes into a corresponding namespace. I also tend to have another library of tests (integration tests?) which is in a more BDD style. This allows me to write tests to spec out what the method should do as well as what the application should do overall.

Definitely a good list. Here are a few thoughts on it:

Write the test first, then the code.

I agree, at a high level. But, I'd be more specific: "Write a test first, then write just enough code to pass the test, and repeat." Otherwise, I'd be afraid that my unit tests would look more like integration or acceptance tests.

Design classes using dependency injection.

Agreed. When an object creates its own dependencies, you have no control over them. Inversion of Control / Dependency Injection gives you that control, allowing you to isolate the object under test with mocks/stubs/etc. This is how you test objects in isolation.
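
For example (with invented types), compare a class that news up its own dependency with one that accepts it through the constructor; only the second can be isolated with a stub:

public interface IReportRepository
{
    string LoadReport(int id);
}

public class SqlReportRepository : IReportRepository
{
    public string LoadReport(int id) { return "..."; } // hits the real database
}

// Hard to test: the dependency is created internally, so a test
// cannot substitute a fake repository.
public class HardWiredReportService
{
    private readonly IReportRepository _repository = new SqlReportRepository();
}

// Testable: the dependency is injected, so a test can pass a stub.
public class ReportService
{
    private readonly IReportRepository _repository;

    public ReportService(IReportRepository repository)
    {
        _repository = repository;
    }
}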

Separate UI code from its behavior using Model-View-Controller or Model-View-Presenter.

Agreed. Note that even the presenter/controller can be tested using DI/IoC, by handing it a stubbed/mocked view and model. Check out Presenter First TDD for more on that.
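
A rough sketch of the shape this takes, with invented ILoginView and ILoginModel interfaces; the presenter depends only on abstractions, so a test can drive it entirely with mocks or stubs and never touch real UI:

public interface ILoginView
{
    string UserName { get; }
    void ShowError(string message);
}

public interface ILoginModel
{
    bool Authenticate(string userName);
}

public class LoginPresenter
{
    private readonly ILoginView _view;
    private readonly ILoginModel _model;

    public LoginPresenter(ILoginView view, ILoginModel model)
    {
        _view = view;
        _model = model;
    }

    // All the behavior lives here, outside the UI, where it is testable.
    public void Login()
    {
        if (!_model.Authenticate(_view.UserName))
            _view.ShowError("Login failed.");
    }
}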

Do not write static methods or classes.

Not sure I agree with this one. It is possible to unit test a static method/class without using mocks. So, perhaps this is one of those Rhino Mock specific rules you mentioned.
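
For instance, a pure static helper with no hidden state can be asserted against directly; the Slugify method below is invented, and the test uses NUnit syntax:

using NUnit.Framework;

public static class StringUtil
{
    public static string Slugify(string input)
    {
        return input.Trim().ToLowerInvariant().Replace(' ', '-');
    }
}

[TestFixture]
public class StringUtilTests
{
    [Test]
    public void Slugify_ReplacesSpacesWithHyphens()
    {
        // No mocks required: a pure static function is exercised
        // directly with an input and an expected output.
        Assert.AreEqual("hello-world", StringUtil.Slugify("Hello World"));
    }
}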

Program off interfaces, not classes.

I agree, but for a slightly different reason. Interfaces provide a great deal of flexibility to the software developer - beyond just support for various mock object frameworks. For example, it is not possible to support DI properly without interfaces.

Isolate external dependencies.

Agreed. Hide external dependencies behind your own facade or adapter (as appropriate) with an interface. This will allow you to isolate your software from the external dependency, be it a web service, a queue, a database or something else. This is especially important when your team doesn't control the dependency (a.k.a. external).
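
A sketch of the idea with an invented payment gateway: the adapter is the only code that touches the vendor, and tests substitute a trivial stub implementation of the interface:

public interface IPaymentGateway
{
    bool Charge(decimal amount, string cardToken);
}

// The adapter wraps the vendor's SDK; nothing else in the system
// references vendor types directly.
public class VendorPaymentGateway : IPaymentGateway
{
    public bool Charge(decimal amount, string cardToken)
    {
        // The third-party call would go here, hidden behind the facade.
        return false; // placeholder for the vendor SDK result
    }
}

// Tests use this stub instead of the real gateway.
public class StubPaymentGateway : IPaymentGateway
{
    public bool Charge(decimal amount, string cardToken) { return true; }
}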

Mark as virtual the methods you intend to mock.

That's a limitation of Rhino Mocks. In an environment that prefers hand-coded stubs over a mock object framework, that wouldn't be necessary.

And, a couple of new points to consider:

Use creational design patterns. This will assist with DI, but it also allows you to isolate that code and test it independently of other logic.
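
For example, a trivial factory (reusing the invented IMailSender and OrderProcessor types from the sketch in the question) keeps construction in one place, where it can be tested apart from the business logic:

// A simple factory: the wiring of dependencies is isolated here,
// so it can be verified independently of OrderProcessor's behavior.
public class OrderProcessorFactory
{
    private readonly IMailSender _mailSender;

    public OrderProcessorFactory(IMailSender mailSender)
    {
        _mailSender = mailSender;
    }

    public OrderProcessor Create()
    {
        return new OrderProcessor(_mailSender);
    }
}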

Write tests using Bill Wake's Arrange/Act/Assert technique. This technique makes it very clear what configuration is necessary, what is actually being tested, and what is expected.
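
A minimal example of the technique (invented Calculator type, NUnit syntax); each section of the test does exactly one job:

using NUnit.Framework;

public class Calculator
{
    public int Duplicate(int value) { return value * 2; }
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Duplicate_DoublesTheValue()
    {
        // Arrange: create the object under test and its input.
        var calculator = new Calculator();
        int input = 5;

        // Act: perform exactly one operation.
        int result = calculator.Duplicate(input);

        // Assert: verify the expected outcome.
        Assert.AreEqual(10, result);
    }
}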

Don't be afraid to roll your own mocks/stubs. Often, you'll find that using mock object frameworks makes your tests incredibly hard to read. By rolling your own, you'll have complete control over your mocks/stubs, and you'll be able to keep your tests readable. (Refer back to previous point.)
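
For example, a hand-coded stub for an invented IClock dependency needs no framework and no setup syntax; a reader can grok it at a glance:

using System;

public interface IClock
{
    DateTime Now { get; }
}

// A hand-rolled stub: always reports the time it was given, making
// time-dependent logic deterministic in tests.
public class FixedClock : IClock
{
    private readonly DateTime _now;

    public FixedClock(DateTime now)
    {
        _now = now;
    }

    public DateTime Now
    {
        get { return _now; }
    }
}

// In a test: var clock = new FixedClock(new DateTime(2008, 1, 1));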

Avoid the temptation to refactor duplication out of your unit tests into abstract base classes, or setup/teardown methods. Doing so hides configuration/clean-up code from the developer trying to grok the unit test. In this case, the clarity of each individual test is more important than refactoring out duplication.

Implement Continuous Integration. Check-in your code on every "green bar." Build your software and run your full suite of unit tests on every check-in. (Sure, this isn't a coding practice, per se; but it is an incredible tool for keeping your software clean and fully integrated.)

Know the difference between fakes, mocks and stubs and when to use each.

Avoid over-specifying interactions using mocks. This makes tests brittle.
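
For instance, assuming Rhino Mocks 3.5's AAA syntax and reusing the invented IMailSender/OrderProcessor types from the question's sketch, assert only the one interaction the test cares about instead of scripting every call:

var sender = MockRepository.GenerateStub<IMailSender>();
var processor = new OrderProcessor(sender);

processor.Process("a@example.com");

// Verify the single interaction that matters; leave the message body
// unconstrained so harmless changes don't break the test.
sender.AssertWasCalled(x => x.Send(
    Arg<string>.Is.Equal("a@example.com"),
    Arg<string>.Is.Anything));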

If you are working with .NET 3.5, you may want to look into the Moq mocking library - it uses expression trees and lambdas to remove the non-intuitive record-replay idiom of most other mocking libraries.

Check out this quickstart to see how much more intuitive your test cases become; here is a simple example:

// ShouldExpectMethodCallWithVariable
int value = 5;
var mock = new Mock<IFoo>();

// Set up the expected call and have it return twice the value.
mock.Expect(x => x.Duplicate(value)).Returns(() => value * 2);

// Exercise the mock and verify the result.
Assert.AreEqual(value * 2, mock.Object.Duplicate(value));

Here's another one that I thought of that I like to do.

If you plan to run tests from the unit test GUI, as opposed to from TestDriven.Net or NAnt, then I've found it easier to set the unit testing project type to console application rather than class library. This allows you to run tests manually and step through them in debug mode (which the aforementioned TestDriven.Net can actually do for you).

Also, I always like to have a Playground project open for testing bits of code and ideas I'm unfamiliar with. This should not be checked into source control. Even better, it should be in a separate source control repository on the developer's machine only.

This is a very helpful post!

I would add that it is always important to understand the context and the System Under Test (SUT). Following TDD principles to the letter is much easier when you're writing new code in an environment where the existing code follows the same principles. But when you're writing new code in a non-TDD legacy environment, you'll find that your TDD efforts can quickly balloon far beyond your estimates and expectations.

For some of you, who live in an entirely academic world, timelines and delivery may not be important, but in an environment where software is money, making effective use of your TDD effort is critical.

TDD is highly subject to the law of diminishing marginal returns. In short, your efforts toward TDD are increasingly valuable until you hit a point of maximum return, after which subsequent time invested into TDD has less and less value.

I tend to believe that TDD's primary value is in boundary (black-box) testing, as well as in occasional white-box testing of mission-critical areas of the system.

The real reason for programming against interfaces is not to make life easier for Rhino, but to clarify the relationships between objects in the code. An interface should define a service that an object needs from its environment. A class provides a particular implementation of that service. Read Rebecca Wirfs-Brock's "Object Design" book on Roles, Responsibilities, and Collaborators.