The JUnit FAQ claims e.g.:
“The general philosophy is this: if it can’t break on its own, it’s too simple to break. First example is the getX() method. Suppose the getX() method only answers the value of an instance variable. In that case, getX() cannot break unless either the compiler or the interpreter is also broken. For that reason, don’t test getX(); there is no benefit.” (http://junit.sourceforge.net/doc/faq/faq.htm, retrieved 2009-05-24.)
The principle behind this is sound: Time is limited and a good writer of test cases focuses on the code that is more likely to break in the future (or to be broken already). However, the formulation used is too sweeping and naive: One of the main reasons to use test cases is to allow for regression testing, and to be able to verify that subclasses fulfill the promises of their parents; and there is no guarantee that future implementations will be similarly simple.
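For the latter point, a common JUnit pattern is an abstract “contract test” that the test classes of concrete implementations inherit. A minimal sketch, assuming JUnit 4 and a hypothetical Shape interface (neither is prescribed by the discussion above):

```java
import org.junit.Assert;
import org.junit.Test;

// Hypothetical parent type whose promise (the area is never negative)
// the contract test below checks for every implementation.
interface Shape {
    double getArea();
}

// Abstract contract test: each implementation gets a concrete subclass that
// overrides createShape() and thereby inherits all the contract checks.
abstract class ShapeContractTest {
    protected abstract Shape createShape();

    @Test
    public void areaIsNeverNegative() {
        Assert.assertTrue(createShape().getArea() >= 0.0);
    }
}

// The test for one concrete implementation: a circle of radius 1.
class UnitCircleTest extends ShapeContractTest {
    @Override
    protected Shape createShape() {
        return new Shape() {
            @Override
            public double getArea() { return Math.PI; }
        };
    }
}
```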
Assume, for a trivial example, that there are two independent implementations of a circle, CA and CB. CA has two attributes, radius and area, which are both set at initialization. CB has one attribute, radius. Now an adapter is written that makes CB objects conform to the CA interface. Supplying a getRadius method is trivial; however, getArea needs an explicit calculation, and in the absence of a consistency check on the area and radius of CA objects, this is a potential source of problems. With a consistency check, however, an error would be easy to discover.
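A minimal sketch of this scenario in JUnit 4 terms; CA, CB, getRadius and getArea come from the example above, while the adapter and test classes, the tolerance, and the modelling of “the CA interface” as a Java interface are assumptions made for illustration:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// CA exposes both radius and area; CB stores only the radius.
interface CA {
    double getRadius();
    double getArea();
}

class CB {
    private final double radius;
    public CB(double radius) { this.radius = radius; }
    public double getRadius() { return radius; }
}

// Adapter that makes CB objects conform to the CA interface. getRadius() is
// trivial; getArea() needs an explicit calculation and is where errors creep in.
class CBAdapter implements CA {
    private final CB adaptee;
    public CBAdapter(CB adaptee) { this.adaptee = adaptee; }

    @Override
    public double getRadius() { return adaptee.getRadius(); }

    @Override
    public double getArea() {
        return Math.PI * adaptee.getRadius() * adaptee.getRadius();
    }
}

// The consistency check: for any CA object, area and radius must agree.
public class CAConsistencyTest {
    private void assertConsistent(CA circle) {
        double expected = Math.PI * circle.getRadius() * circle.getRadius();
        assertEquals(expected, circle.getArea(), 1e-9);
    }

    @Test
    public void adapterAreaMatchesRadius() {
        assertConsistent(new CBAdapter(new CB(2.0)));
    }
}
```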
Why not just have the developer of the adapter add this test? Actually, this is a valid option; however, there are many lazy developers, there are very many who are on too tight a schedule, and the possibility cannot be ruled out that he assumes that the test is already present. As a rule of thumb: Be defensive in programming now, do not rely on someone else being defensive later on. Notably, with IDE templates, a shell script, or similar, writing test cases for trivial methods often takes so little effort that they are worthwhile.
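For instance, a test for the trivial getRadius of the hypothetical CB above amounts to just a few lines:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// A trivial-getter test; CB is the hypothetical class from the sketch above.
public class CBGetterTest {
    @Test
    public void getRadiusReturnsConstructorArgument() {
        assertEquals(2.0, new CB(2.0).getRadius(), 0.0);
    }
}
```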
That being said, whether to test e.g. a trivial getter is a judgement call: Sometimes it is a good decision, sometimes not. The point of the above is not to advocate testing everything that could conceivably break at some unspecified time in the far-away future, but rather to point out the naivete of the attitude “If it cannot break today, do not test it.”—If a to-be-married couple has no intention of ever getting a divorce, does that mean that a pre-nup is a waste of time and money?
For some reason, my test cases tend to have more bugs than my code. Correspondingly, when a (newly written) test case fails, my first step is to check the test case for errors.
From anecdotal evidence, I have the impression that many others have the same distribution of errors, and would do well to follow the same practice.
Even for those with a different distribution, the possibility of a test-case error should always be borne in mind.
All non-trivial code is buggy: A test that finds one of these bugs is successful, and has brought the developers a little closer to the unreachable ideal of bug-free software. A test that does not find a bug has as its only benefit that it may discover future bugs.
Correspondingly, a good developer learns to think positively about newly found bugs: It is not “Shit! Another bug that I have to solve now.”, but “Good! One bug less that will cause me problems after the release.”. Notably, if bug after bug pops up, then this is a sign that the overall quality is too low and a thorough review and/or re-write should be made—to save time in the medium and long term.
Correspondingly, a good developer does not fall into the trap of writing “kind” test cases that are unlikely to find bugs, but tries to be as mean as possible (see the sketch at the end of this section).
Correspondingly, a good developer will have a philosophy of “better one test case too many than one too few”. (It is possible to write too many test cases, but when in doubt about any individual test case, prefer to write it.)
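To illustrate the difference between “kind” and “mean”: a kind test of the adapter above would check one comfortable value and stop there; a meaner one (again only a sketch, reusing the hypothetical CB and CBAdapter) also probes boundary values:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;

// "Mean" tests probe the boundaries where an implementation is most likely to
// fail; CB and CBAdapter are the hypothetical classes from the sketch above.
public class MeanCircleTest {
    @Test
    public void zeroRadiusGivesZeroArea() {
        assertEquals(0.0, new CBAdapter(new CB(0.0)).getArea(), 0.0);
    }

    @Test
    public void hugeRadiusDoesNotOverflow() {
        // The square of 1.0e150 is still representable as a double, so the
        // computed area must not degenerate into infinity.
        assertFalse(Double.isInfinite(new CBAdapter(new CB(1.0e150)).getArea()));
    }
}
```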