Thursday, January 12, 2012

Flipping out over flipping the bit

I like Uncle Bob Martin. We're nearly finished with his book Clean Code in our work study group, and I only disagree with ~5% of what he says. ;)

However, he's flipped out over Flipping the Bit. Referencing an article by Tim Fischer, UB has decided that because Fischer questions the value of writing Unit Tests 100% of the time, Fischer doesn't value testing at all (I think he does; he just has a bad design that makes his testing life harder).

Unit Tests don't automatically equal TDD: the T stands for Tests, and as we know there are many levels of testing. Unit, integration, end-to-end, etc. I'm all for TDD; so strongly, in fact, that I've occasionally bordered on being a zealot. Here I totally agree with UB's points about testing (TDD to be specific) bringing higher quality, "cheaper" code into existence. Fischer has it totally wrong that "[unit] tests are little-used for the development of enterprise applications." In my organisation we write Unit Tests all the time (as part of TDD), and they provide a high degree of feedback and value to the project. His point about the cost (purely monetary) of writing Unit Tests is true from a mathematical perspective, but it's a cost worth paying.

Point (1) of UB's list is totally justified in being there. Reading Fischer's post, one can easily think that he hasn't grasped the point of TDD, because his examples talk about writing the tests after the implementation. UB is right to smack Fischer on the nose for this one.

Sadly, there are kernels of truth woven into both these posts, and I think UB missed the nugget in Fischer's post, which leads to UB's second (erroneous) point:

Unit tests don’t find all bugs because many bugs are integration bugs, not bugs in the unit-tested components.

Why is he wrong? Because Unit Tests != TDD. The jump there was astonishing to my mind; Superman couldn't have cleared that gap! We do have to justify the existence of test code - but to ourselves, not to higher-ups or the Unit Test Compliance Squads. What value are these tests adding? How are they proving the correctness of my program and creating/improving my design/architecture?

If you're writing an Adapter (in the sense of the Growing Object Oriented Software, Guided By Tests book) then Unit Tests add little value to ensuring that the Adapter works correctly, because the Adapter is so tightly coupled to the Adaptee that you'd essentially have to replicate the Adaptee in fakes and stubs. Any bugs that do creep into the Adapter will probably not show up in Unit Tests, because those bugs are usually signs that the developer misunderstood the Adaptee's behaviour in a particular scenario, and therefore coded the fake/stub incorrectly as well. You've got a broken stub, an incorrect test, and a green light.
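To make that concrete, here's a minimal sketch (PaymentGateway, Receipt and PaymentAdapter are names I've made up for illustration, using JUnit): the stub bakes in the same misunderstanding of the Adaptee as the Adapter does, so the Unit Test stays green while production falls over.

```java
import static org.junit.Assert.assertEquals;

import java.util.Collections;
import java.util.List;

import org.junit.Test;

// Hypothetical third-party Adaptee. Suppose the real thing returns null
// (not an empty list) when an account has no receipts.
interface PaymentGateway {
    List<Receipt> findReceipts(String accountId);
}

class Receipt {
}

// Our Adapter, written under the mistaken belief that the gateway
// returns an empty list when there's nothing to find.
class PaymentAdapter {
    private final PaymentGateway gateway;

    PaymentAdapter(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    int receiptCount(String accountId) {
        // NullPointerException in production when the gateway returns null.
        return gateway.findReceipts(accountId).size();
    }
}

public class PaymentAdapterTest {
    @Test
    public void returnsZeroWhenThereAreNoReceipts() {
        // The stub encodes the same misunderstanding as the Adapter,
        // so the test is green even though the Adapter is broken.
        PaymentGateway stub = new PaymentGateway() {
            public List<Receipt> findReceipts(String accountId) {
                return Collections.<Receipt>emptyList();
            }
        };
        assertEquals(0, new PaymentAdapter(stub).receiptCount("acct-1"));
    }
}
```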

An example is a DAO. It's designed to abstract away access to the DB and is tightly coupled to the underlying DB technology (JPA, JDBC, etc). You don't want to Unit Test that. Integration Tests add far more value/feedback with less code to maintain. Add in an in-memory DB and you've got easy, fast-ish tests that have found bugs in my code far more often than I'd like. Unit Tests at the Adapter level have, in the end, only ever been deleted from my team's codebase, because they take time (therefore money) to maintain, replicate the testing logic of the Integration Tests, and give little feedback about what's going on down there. That's in line with Fischer's gripes: the costs of those tests outweigh the benefits.
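A rough sketch of what that kind of Integration Test looks like (CustomerDao is a made-up example; it assumes JUnit and the H2 in-memory database are on the classpath):

```java
import static org.junit.Assert.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

// Made-up JDBC-backed DAO, exercised against a real (if in-memory) database
// instead of a forest of stubbed Connections and ResultSets.
class CustomerDao {
    private final Connection connection;

    CustomerDao(Connection connection) {
        this.connection = connection;
    }

    String findNameById(long id) throws SQLException {
        PreparedStatement ps =
                connection.prepareStatement("SELECT name FROM customer WHERE id = ?");
        try {
            ps.setLong(1, id);
            ResultSet rs = ps.executeQuery();
            return rs.next() ? rs.getString("name") : null;
        } finally {
            ps.close();
        }
    }
}

public class CustomerDaoIntegrationTest {
    private Connection connection;

    @Before
    public void setUp() throws SQLException {
        // Fresh in-memory H2 database per test; it vanishes when the connection closes.
        connection = DriverManager.getConnection("jdbc:h2:mem:daotest");
        Statement st = connection.createStatement();
        st.execute("CREATE TABLE customer (id BIGINT PRIMARY KEY, name VARCHAR(100))");
        st.execute("INSERT INTO customer VALUES (1, 'Ada')");
        st.close();
    }

    @After
    public void tearDown() throws SQLException {
        connection.close();
    }

    @Test
    public void findsACustomerByPrimaryKey() throws SQLException {
        assertEquals("Ada", new CustomerDao(connection).findNameById(1));
    }
}
```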

Where Fischer goes seriously wrong is that he doesn't factor all forms of testing into his money calculations, and doesn't realise that if you don't do TDD properly (where Unit Tests do play an integral part) you'll spend more money.

His pretty picture is flawed in that SomeMethod() is business logic (a Port) that uses data from several sources. But a Port should never fetch that data directly; it should always go via an Adapter ("Tell, don't ask", SOLID, etc all show how good design ends up with this result). Hence SomeMethod() can be Unit Tested to the Nth degree, covering every scenario conceivable, because the Adapters (which we own and hopefully understand) can be mocked, while the Adapters themselves are Integration Tested. Otherwise the amount of code required to set up what is essentially a Unit Test (because we're focused on the SomeMethod() unit) for every scenario of SomeMethod() becomes prohibitive. Developers being developers will slack off and not write them. If they do, the bean counters will get upset because the cost of developing/maintaining the tests climbs as the tests are brittle. And if there is a bug, where is it located? SomeMethod(), the third-party "systems", or the conduits in between? So you spend more time and money tracking down a problem.
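Here's a small sketch of the shape I'm describing (ExchangeRateAdapter and InvoiceCalculator are hypothetical names standing in for an Adapter and for Fischer's SomeMethod(), again using JUnit): the business logic sees only the Adapter interface, so stubbing it for any scenario is trivial.

```java
import static org.junit.Assert.assertEquals;

import java.math.BigDecimal;

import org.junit.Test;

// Hypothetical Adapter interface: the Port talks to this, while the real
// implementation (wrapping some third-party rates service) is Integration
// Tested separately.
interface ExchangeRateAdapter {
    BigDecimal rateFor(String currency);
}

// The business logic (Fischer's "SomeMethod()"): pure calculation, no I/O.
class InvoiceCalculator {
    private final ExchangeRateAdapter rates;

    InvoiceCalculator(ExchangeRateAdapter rates) {
        this.rates = rates;
    }

    BigDecimal totalInEuros(BigDecimal amount, String currency) {
        return amount.multiply(rates.rateFor(currency));
    }
}

public class InvoiceCalculatorTest {
    @Test
    public void convertsUsingTheRateFromTheAdapter() {
        // Stubbing the Adapter takes a few lines, so every business scenario
        // can be Unit Tested without going near the third-party systems.
        ExchangeRateAdapter stubRates = new ExchangeRateAdapter() {
            public BigDecimal rateFor(String currency) {
                return new BigDecimal("0.5");
            }
        };
        BigDecimal total =
                new InvoiceCalculator(stubRates).totalInEuros(new BigDecimal("10"), "AUD");
        assertEquals(new BigDecimal("5.0"), total);
    }
}
```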

This is where Fischer throws the baby out with the bathwater. He has bad design.

I'm surprised that Uncle Bob didn't pick up on this, and instead focused (rightly) on Fischer's points about the cost side of not writing Unit Tests, which devolved (wrongly) into a rant about not writing tests at all.

TDD is the way to go (one should flip the bit for that), but Unit Tests are not always beneficial (eg: for Adapters) and can bring little ROI; there, the Integration Tests should be written first, with the Adapter implemented to pass them. Having said that, if you're throwing Unit Tests out altogether you've got a seriously flawed design.