Don't write ginormous tests. As the 'unit' in 'unit test' suggests, make each one as atomic and isolated as possible. If you must, create preconditions using mock objects rather than recreating too much of the typical user environment manually (a short sketch of this follows below).
Don't test things that obviously work. Avoid testing the classes from a third-party vendor, especially the one supplying the core APIs of the framework you code in. E.g., don't test adding an item to the vendor's Hashtable class.
Consider using a code coverage tool such as NCover to help discover edge cases you have yet to test.
Try writing the test before the implementation. Think of the test as more of a specification that your implementation will adhere to. Cf. also behavior-driven development, a more specific branch of test-driven development.
Be consistent. If you only write tests for some of your code, it's hardly useful. If you work in a team, and some or all of the others don't write tests, it's not very useful either. Convince yourself and everyone else of the importance (and time-saving properties) of testing, or don't bother.
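To make the mock-objects point above concrete, here is a minimal sketch of my own (not from the original answer), using NUnit and Moq; IUserRepository and Greeter are invented names. The test fabricates just the precondition it needs instead of building up a real user environment:
using Moq;
using NUnit.Framework;

public interface IUserRepository
{
    string GetDisplayName(int userId);
}

public class Greeter
{
    private readonly IUserRepository _users;

    public Greeter(IUserRepository users) { _users = users; }

    public string Greet(int userId)
    {
        return "Hello, " + _users.GetDisplayName(userId);
    }
}

[TestFixture]
public class GreeterTests
{
    [Test]
    public void Greet_KnownUser_UsesDisplayNameFromRepository()
    {
        // Mock only the precondition this one test needs,
        // instead of standing up a real user store.
        var repo = new Mock<IUserRepository>();
        repo.Setup(r => r.GetDisplayName(42)).Returns("Ada");

        var greeter = new Greeter(repo.Object);

        Assert.That(greeter.Greet(42), Is.EqualTo("Hello, Ada"));
    }
}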
Tests should initially fail. Then you should write the code that makes them pass; otherwise you run the risk of writing a test that is itself buggy and always passes.
Never assume that a trivial 2-line method will work. Writing a quick unit test is the only way to prevent the missing null check, misplaced minus sign and/or subtle scoping error from biting you, inevitably when you have even less time to deal with it than now (a short sketch of this follows below).
When a test fails, it should be immediately obvious where the problem lies. If you have to use the debugger to track down the problem, then your tests aren't granular enough. Having exactly one assertion per test helps here.
When you refactor, no tests should fail.
Tests should run so fast that you never hesitate to run them.
All tests should pass always; no non-deterministic results.
Unit tests should be well-factored, just like your production code.
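To illustrate the point about trivial methods and single assertions, here is a hedged sketch of my own (Person and FullName are invented). Written before the fix, this test fails against the two-line method below, which is exactly how the missing null handling gets caught:
using NUnit.Framework;

public class Person
{
    public string First { get; set; }
    public string Last { get; set; }

    // A "trivial" two-line method - which still blows up with a
    // NullReferenceException when Last is null.
    public string FullName()
    {
        return First.Trim() + " " + Last.Trim();
    }
}

[TestFixture]
public class PersonTests
{
    [Test]
    public void FullName_LastNameIsNull_DoesNotThrow()
    {
        var p = new Person { First = "Ada", Last = null };

        // Exactly one assertion, so a failure points straight at the missing null handling.
        Assert.DoesNotThrow(() => p.FullName());
    }
}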
EDIT: There was a comment about duplication caused by "one assertion per test". Specifically, if you have some code to set up a scenario and then want to make multiple assertions about it, but only have one assertion per test, you might duplicate the setup across multiple tests.
I don't take that approach. Instead, I use test fixtures per scenario. Here's a rough example:
[TestFixture]
public class StackTests
{
    [TestFixture]
    public class EmptyTests
    {
        Stack<int> _stack;

        [SetUp]
        public void TestSetup()
        {
            _stack = new Stack<int>();
        }

        [Test]
        public void PopFails()
        {
            // Popping an empty stack should throw (InvalidOperationException assumed here).
            Assert.Throws<InvalidOperationException>(() => _stack.Pop());
        }

        [Test]
        public void IsEmpty()
        {
            Assert.That(_stack.IsEmpty(), Is.True);
        }
    }

    [TestFixture]
    public class PushedOneTests
    {
        Stack<int> _stack;

        [SetUp]
        public void TestSetup()
        {
            _stack = new Stack<int>();
            _stack.Push(7);
        }

        // Tests for one item on the stack...
    }
}
Tests should be isolated. One test should not depend on another. Even further, a test should not rely on external systems. In other words, test your code, not the code your code depends on. You can test those interactions as part of your integration or functional tests.
Often unit tests are based on mock objects or mock data.
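A small sketch of that isolation (my own example; INotificationSender, OrderService and FakeSender are invented names): the unit test talks to an in-memory fake instead of a real mail server, and the real interaction is left to integration tests:
using System.Collections.Generic;
using NUnit.Framework;

public interface INotificationSender
{
    void Send(string recipient, string message);
}

// Production code depends on the abstraction, not on a real SMTP server.
public class OrderService
{
    private readonly INotificationSender _sender;

    public OrderService(INotificationSender sender) { _sender = sender; }

    public void PlaceOrder(string customerEmail)
    {
        // ... order logic elided ...
        _sender.Send(customerEmail, "Your order was placed.");
    }
}

// In-memory fake used only by the tests.
public class FakeSender : INotificationSender
{
    public readonly List<string> Recipients = new List<string>();

    public void Send(string recipient, string message)
    {
        Recipients.Add(recipient);
    }
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void PlaceOrder_Always_NotifiesTheCustomer()
    {
        var sender = new FakeSender();
        var service = new OrderService(sender);

        service.PlaceOrder("customer@example.com");

        Assert.That(sender.Recipients, Does.Contain("customer@example.com"));
    }
}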
I like to write three kinds of unit tests:
"transient" unit tests: they create their own mock objects/data and test their function with them, but destroy everything and leave no trace (e.g. no data left in a test database).
"persistent" unit tests: they test functions within your code while creating objects/data that will be needed later by more advanced functions for their own unit tests (so those more advanced functions don't have to recreate their own set of mock objects/data every time).
"persistent-based" unit tests: unit tests using mock objects/data that are already there (because they were created in another unit-test session by the persistent unit tests).
The point is to avoid replaying everything just to be able to test every function.
I run the third kind very often because all mock objects/data are already there.
I run the second kind whenever my model changes.
I run the first kind to check the very basic functions once in a while, to check for basic regressions.
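For example, a "transient" test in the sense above might look like this (my own sketch; CsvImporter is an invented stand-in for the function under test). It creates its own throw-away data and removes it before it finishes:
using System.Collections.Generic;
using System.IO;
using NUnit.Framework;

// Minimal stand-in for the code under test.
public static class CsvImporter
{
    public static List<string> Import(string path)
    {
        var records = new List<string>();
        var lines = File.ReadAllLines(path);
        for (int i = 1; i < lines.Length; i++)   // skip the header row
        {
            if (lines[i].Trim().Length > 0)
                records.Add(lines[i]);
        }
        return records;
    }
}

[TestFixture]
public class CsvImporterTransientTests
{
    [Test]
    public void Import_SingleDataRow_ReturnsOneRecord()
    {
        // Create the mock data this test needs...
        var path = Path.GetTempFileName();
        File.WriteAllText(path, "id,name\n1,Ada\n");

        try
        {
            Assert.That(CsvImporter.Import(path).Count, Is.EqualTo(1));
        }
        finally
        {
            // ...and destroy everything, leaving no trace behind.
            File.Delete(path);
        }
    }
}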
Jay Fields has a lot of good advice about writing unit tests, and there is a post where he summarizes the most important points. There you will read that you should think critically about your context and judge whether the advice is worth it to you. You will get a ton of amazing answers here, but it is up to you to decide which is best for your context. Try them, and just refactor if it smells bad to you.
Let me begin by plugging sources - Pragmatic Unit Testing in Java with JUnit (there's a version with C#/NUnit too, but I have this one; it's agnostic for the most part. Recommended.)
Good tests should be A TRIP (the acronym isn't sticky enough - I have a printout of the cheatsheet in the book that I had to pull out to make sure I got this right..)
Automatic : Invoking of tests as well as checking results for PASS/FAIL should be automatic
Thorough: Coverage; Although bugs tend to cluster around certain regions in the code, ensure that you test all key paths and scenarios.. Use tools if you must to know untested regions
Repeatable: Tests should produce the same results each time.. every time. Tests should not rely on uncontrollable params.
Independent: Very important.
Tests should test only one thing at a time. Multiple assertions are okay as long as they are all testing one feature/behavior. When a test fails, it should pinpoint the location of the problem.
Tests should not rely on each other - Isolated. No assumptions about order of test execution. Ensure 'clean slate' before each test by using setup/teardown appropriately
Professional: In the long run you'll have as much test code as production (if not more), therefore follow the same standard of good-design for your test code. Well factored methods-classes with intention-revealing names, No duplication, tests with good names, etc.
Good tests also run Fast. Any test that takes over half a second to run needs to be worked on. The longer the test suite takes for a run, the less frequently it will be run, and the more changes the dev will try to sneak in between runs; if anything breaks, it will take longer to figure out which change was the culprit.
Update 2010-08:
Readable: This can be considered part of Professional - however it can't be stressed enough. An acid test would be to find someone who isn't part of your team and ask him/her to figure out the behavior under test within a couple of minutes. Tests need to be maintained just like production code - so make them easy to read even if it takes more effort. Tests should be symmetric (follow a pattern) and concise (test one behavior at a time). Use a consistent naming convention (e.g. the TestDox style). Avoid cluttering the test with "incidental details".. become a minimalist.
Apart from these, most of the others are guidelines that cut down on low-benefit work: e.g. 'Don't test code that you don't own' (e.g. third-party DLLs). Don't go about testing getters and setters. Keep an eye on cost-to-benefit ratio or defect probability.
Personally I feel that you can get pretty far by checking that you get the right results (1 + 1 should return 2 in an addition function), trying out all the boundary conditions you can think of (such as using two numbers whose sum is greater than the integer max value in the add function), and forcing error conditions such as network failures.
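A small sketch of that (my own example; Calculator.Add is invented and assumed to use checked arithmetic): one test for the expected result, one for the boundary where the sum no longer fits in an int:
using System;
using NUnit.Framework;

public static class Calculator
{
    // checked() so that overflow surfaces as an exception instead of silently wrapping
    public static int Add(int a, int b)
    {
        return checked(a + b);
    }
}

[TestFixture]
public class CalculatorAddTests
{
    [Test]
    public void Add_OnePlusOne_ReturnsTwo()
    {
        Assert.That(Calculator.Add(1, 1), Is.EqualTo(2));
    }

    [Test]
    public void Add_SumLargerThanIntMaxValue_Throws()
    {
        // Boundary condition: the mathematical sum no longer fits in an int.
        Assert.Throws<OverflowException>(() => Calculator.Add(int.MaxValue, 1));
    }
}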
I second the "A TRIP" answer, except that tests SHOULD rely on each other!!!
Why?
DRY - Don't Repeat Yourself - applies to testing as well! Test dependencies can help to 1) save setup time, 2) save fixture resources, and 3) pinpoint failures. Of course, only given that your testing framework supports first-class dependencies. Otherwise, I admit, they are bad.
I haven't quite figured out how to do this for complex environments.
All the textbooks start to come unglued as your code base reaches into the hundreds of thousands or millions of lines of code.
Team interactions explode.
The number of test cases explodes.
Interactions between components explode.
The time to build all the unit tests becomes a significant part of the build time.
An API change can ripple to hundreds of test cases, even though the production code change was easy.
The number of events required to sequence processes into the right state increases, which in turn increases test execution time.
Good architecture can control some of the interaction explosion, but inevitably as systems become more complex the automated testing system grows with it.
This is where you start having to deal with trade-offs:
Only test the external API, otherwise refactoring internals results in significant test-case rework.
Setup and teardown of each test get more complicated as an encapsulated subsystem retains more state.
Nightly compilation and automated test execution grow to hours.
Increased compilation and execution times mean designers don't or won't run all the tests.
To reduce test execution times, you consider sequencing tests to cut down on setup and teardown.
You also need to decide:
Where do you store test cases in your code base?
How do you document your test cases?
Can test fixtures be re-used to save test-case maintenance?
What happens when a nightly test-case execution fails? Who does the triage?
How do you maintain the mock objects? If you have 20 modules all using their own flavor of a mock logging API, changing the API ripples quickly. Not only do the test cases change, but the 20 mock objects change too. Those 20 modules were written over several years by many different teams. It's a classic re-use problem.
Individuals and their teams understand the value of automated tests; they just don't like how the other team is doing it. :-)
Most of the answers here seem to address unit testing best practices in general (when, where, why and what), rather than actually writing the tests themselves (how). Since the question seemed pretty specific on the "how" part, I thought I'd post this, taken from a "brown bag" presentation that I conducted at my company.
While this organizational strategy has been around for a while and called many things, the introduction of the "AAA" acronym recently has been a great way to get this across. Making all your tests consistent with AAA style makes them easy to read and maintain.
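For example (my own sketch, with an invented ShoppingCart class), every test then reads Arrange, Act, Assert from top to bottom:
using System.Collections.Generic;
using NUnit.Framework;

public class ShoppingCart
{
    private readonly List<string> _items = new List<string>();

    public int ItemCount { get { return _items.Count; } }

    public void AddItem(string name) { _items.Add(name); }
}

[TestFixture]
public class ShoppingCartTests
{
    [Test]
    public void AddItem_ToEmptyCart_ItemCountBecomesOne()
    {
        // Arrange
        var cart = new ShoppingCart();

        // Act
        cart.AddItem("book");

        // Assert
        Assert.That(cart.ItemCount, Is.EqualTo(1));
    }
}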
3. Always provide a failure message with your Asserts.
Assert.That(x == 2 && y == 2, "An incorrect number of begin/end element processing events was raised by the XElementSerializer");
A simple yet rewarding practice that makes it obvious in your runner application what has failed. If you don't provide a message, you'll usually get something like "Expected true, was false" in your failure output, which makes you have to actually go read the test to find out what's wrong.
4. Comment the reason for the test – what’s the business assumption?
/// A layer cannot be constructed with a null gisLayer, as every function
/// in the Layer class assumes that a valid gisLayer is present.
[Test]
public void ShouldNotAllowConstructionWithANullGisLayer()
{
}
This may seem obvious, but this practice will protect the integrity of your tests from people who don't understand the reason behind the test in the first place. I've seen many tests get removed or modified that were perfectly fine, simply because the person didn't understand the assumptions that the test was verifying.
If the test is trivial or the method name is sufficiently descriptive, it can be permissible to leave the comment off.
5. Every test must always revert the state of any resource it touches
Use mocks where possible to avoid dealing with real resources.
Cleanup must be done at the test level. Tests must not have any reliance on order of execution.
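Here is a rough sketch of what that looks like (my own example, using NUnit setup/teardown and a temporary directory): each test gets a clean slate regardless of execution order, and everything it touched is reverted afterwards.
using System.IO;
using NUnit.Framework;

[TestFixture]
public class ReportWriterTests
{
    private string _dir;

    [SetUp]
    public void CreateWorkingDirectory()
    {
        // Fresh, uniquely named directory for every single test.
        _dir = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        Directory.CreateDirectory(_dir);
    }

    [TearDown]
    public void RemoveWorkingDirectory()
    {
        // Revert the state of every resource the test touched.
        if (Directory.Exists(_dir))
            Directory.Delete(_dir, recursive: true);
    }

    [Test]
    public void WritesAFileIntoTheWorkingDirectory()
    {
        File.WriteAllText(Path.Combine(_dir, "report.txt"), "hello");

        Assert.That(File.Exists(Path.Combine(_dir, "report.txt")), Is.True);
    }
}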
I use a consistent test naming convention described by Roy Osherove's Unit Test Naming standards. Each method in a given test-case class has the following naming style: MethodUnderTest_Scenario_ExpectedResult.
The first section of the test name is the name of the method in the system under test.
Next is the specific scenario that is being tested.
Finally comes the expected result of that scenario.
Each section uses upper camel case and is delimited by an underscore.
I have found this useful because when I run the tests, they are grouped by the name of the method under test. And having a convention allows other developers to understand the test intent.
I also append parameters to the method name if the method under test has been overloaded.
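A few test names following this convention (my own examples; BankAccount, Withdraw and Deposit are invented):
using NUnit.Framework;

[TestFixture]
public class BankAccountTests
{
    [Test]
    public void Withdraw_AmountLargerThanBalance_ThrowsInsufficientFundsException() { /* ... */ }

    [Test]
    public void Withdraw_ValidAmount_ReducesBalance() { /* ... */ }

    // The parameter types are appended to the method name because Deposit is overloaded.
    [Test]
    public void DepositDecimal_PositiveAmount_IncreasesBalance() { /* ... */ }

    [Test]
    public void DepositDecimalCurrency_UnknownCurrencyCode_ThrowsArgumentException() { /* ... */ }
}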