I had an interesting exchange with someone on Twitter last night - he'd been working on some math that kind of looked right to him, but he had no way of knowing whether it was 'right' or not.
I teased him that he should have done TDD, but he felt that TDD meant you had to know your API ahead of coding, and his situation was evolving, so that ruled out TDD.
I was arguing that actually TDD is ideal for when you're not quite sure where you're headed - a viewpoint that didn't fly with his experience - so this is an attempt to further explain that sentiment.
Your brain and body will try to resist TDD
A common barrier to adopting TDD (this is what my colleagues and peers come back with over and over) is "I don't know enough about what I'm doing to write the test yet."
My response is that if you don't know enough about what you're doing to write the test yet, you sure as hell don't know enough to write the code yet!
Test Driven Development shifts a ton of that 'wtf am I trying to do?' pain to the front of the process. And that's hard to get used to. It exposes what you don't have clarity about - when you really just want to get on and pretend you do know where you're headed.
So - how can TDD possibly help when you don't have a clear idea of your direction?
TDD means more than one kind of test
I do three kinds of tests: end-to-end tests, integration tests and unit tests. Combining all three varieties is the key to driving evolving development. (Which, IMHO, is the only kind there really is.)
I write the end-to-end tests first. An end-to-end test describes a user story. If you don't know what your user stories are then you need to get those together before you take another step. User stories will be numerous, even in the simplest app, so don't overwhelm yourself - just start with the shortest, simplest user story in your requirements.
User story / end-to-end tests
In a complex app, the user stories are rarely actually end-to-end (startup to shutdown) but they capture a unit of meaningful interaction with the application.
There is only one user story per end-to-end test case. Some examples from my current app include:
LibrarySimpleSearchReturnsInvalid
LibrarySimpleSearchProducesNoResults
LibrarySimpleSearchProducesResults
LibraryShowAllShowsAllItems
LibraryAdvancedSearchProducesResultsByType
LibraryAdvancedSearchProducesResultsByExclusion
... you get the idea.
In each case, the test recreates the user experience from the moment of opening the library (which is a searchable, browsable set of resources of different types - jpg, document, video etc) to the moment the search results are received.
This means it's a UI-driven test. I have the test code enter text, push buttons and so on, then I delve into the display list to verify that the correct items / text have appeared on screen at the end. Usually this is asynchronous, to allow time for transitions.
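To give a feel for the shape, here's a minimal sketch of what a test like LibrarySimpleSearchProducesResults might look like, assuming a FlexUnit 4 style async test. The view class and its members (LibraryView, searchInput, searchButton, resultsList) and the createFullyWiredLibraryView() helper are hypothetical stand-ins for whatever your own app exposes - the shape is the point, not the names.

// (inside the test case class; relies on flash.events, flash.utils.Timer,
//  org.flexunit.async.Async and org.flexunit.asserts.assertTrue)
private var libraryView:LibraryView;

[Test(async)]
public function librarySimpleSearchProducesResults():void {
    // Stand up the real, fully wired library view (hypothetical helper boots the
    // app context and adds the view to the test stage).
    libraryView = createFullyWiredLibraryView();

    // Recreate the user experience: type a search term and click the search button.
    libraryView.searchInput.text = "volcano";
    libraryView.searchButton.dispatchEvent(new MouseEvent(MouseEvent.CLICK));

    // Results arrive asynchronously (service call plus transitions), so wait before asserting.
    var timer:Timer = new Timer(500, 1);
    timer.addEventListener(TimerEvent.TIMER_COMPLETE, Async.asyncHandler(this, verifyResultsOnScreen, 2000));
    timer.start();
}

private function verifyResultsOnScreen(event:TimerEvent, passThroughData:Object):void {
    // Delve into the display list: the results panel should now contain at least one item.
    assertTrue("Expected search results on screen", libraryView.resultsList.numChildren > 0);
}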
Integration / functional area tests
These test a component of functionality. For example the search window, or the results viewer.
Unlike unit tests, they make use of real, concrete instances of the classes needed to fully instantiate and work with the components being tested. If the functional area depends on the framework to wire it together, the framework (robotlegs in my case) is instantiated in order to wire up the component.
In my current app I have an integration test for the main menu:
NestedMenuTest
This menu has multiple layers of nested items and has to collapse all / expand all / auto-collapse and so on in response to checkbox clicks. My integration tests check that the scrolling behaves itself when the items are being expanded/collapsed.
test_max_scroll_then_collapseAll_resolves
test_mid_scroll_then_expandAll_keeps_top_visible_item_in_place_and_scales_scroller
and so on...
Usually, integration tests are event-driven - I kick it all off by manually firing an event. Often, but not always, they require you to use the display list to verify the results.
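For flavour, here's a rough sketch of how the first of those might look. NestedMenu, MenuEvent and the scroller API below are illustrative names, not the real classes from my app:

[Test]
public function test_max_scroll_then_collapseAll_resolves():void {
    // Real, concrete component with enough nested items to need scrolling - no mocks at this level.
    var menu:NestedMenu = createPopulatedNestedMenu();

    // Put it in the interesting state: scrolled all the way to the bottom.
    menu.scroller.scrollPosition = menu.scroller.maxScrollPosition;

    // Kick everything off by manually firing the event the 'collapse all' checkbox would dispatch.
    menu.dispatchEvent(new MenuEvent(MenuEvent.COLLAPSE_ALL));

    // The collapsed menu is much shorter, so the scroll position has to resolve to a legal value.
    assertTrue("Scroll position should be back in range after collapse all",
               menu.scroller.scrollPosition <= menu.scroller.maxScrollPosition);
}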
Unit / API tests
These test that a specific class does what it is supposed to. They test all the public (API) functions of a class, sometimes multiple times if there are errors to be thrown or alternative paths through the class itself.
There is a school of thought that says test all API except for property accessors. I tend to test my property accessors as well, because there is no limit to what I can screw up, and it's faster to get them right at this point than when the error emerges later.
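They cost almost nothing to write, either - an accessor test is rarely more than a couple of lines. Something in this direction (the Preferences class and its volume property are purely illustrative):

[Test]
public function volume_returns_what_was_set():void {
    var preferences:Preferences = new Preferences();
    preferences.volume = 0.5;
    assertEquals(0.5, preferences.volume);
}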
Instead of using concrete instances of their complex dependencies (eg services), my unit tests make use of mocks (using Drew Bourne's Mockolate) to verify that they've acted upon those classes correctly.
If I was testing that a command pulled values from the event that triggered it, did some jiggery pokery with these values and then used the results in calling a method on the appropriate service, I would mock the services to verify that call, rather than try to test against the results of the call.
Here, I've mocked the two services, lessonLoaderService / joinedLessonLoaderService:
public function testLoadsLessonIfRequestEventNotJoinedLesson():void {
    // The final constructor argument (false) marks this request as NOT a joined lesson.
    var lessonLoadRequestData:ILessonLoadRequestData = new LessonLoadRequestDTO("test", "testSwfPath", true, '', false);
    var testEvent:LessonDownloadEvent = new LessonDownloadEvent(LessonDownloadEvent.LESSON_DOWNLOAD_READY, lessonLoadRequestData);

    // Run the command against the event.
    instance.event = testEvent;
    instance.execute();

    // Verify against the mock: the plain lesson loader was called exactly once, with the right path.
    verify(instance.lessonLoaderService).method("loadLesson").args(equalTo('testSwfPath'));
    verify(instance.lessonLoaderService).method("loadLesson").once();
}

public function testLoadsJoinedIfRequestEventIsJoinedLesson():void {
    // Identical setup, except the final constructor argument (true) marks this as a joined lesson.
    var lessonLoadRequestData:ILessonLoadRequestData = new LessonLoadRequestDTO("test", "testSwfPath", true, '', true);
    var testEvent:LessonDownloadEvent = new LessonDownloadEvent(LessonDownloadEvent.LESSON_DOWNLOAD_READY, lessonLoadRequestData);

    instance.event = testEvent;
    instance.execute();

    // This time the joined lesson loader should receive the call instead.
    verify(instance.joinedLessonLoaderService).method("loadLesson").args(equalTo('testSwfPath'));
    verify(instance.joinedLessonLoaderService).method("loadLesson").once();
}
Putting it all together
Often, we put applications together from the bottom up. With half an eye on the requirements, we start thinking about interfaces and event types and functional areas. This works, but it can also result in some YAGNI code, as well as code that gets thrown out because it seemed relevant right up until you realised the requirements weren't complete.
I think there's more sanity in a workflow that runs this way:
1) User story
2) End-to-end test that verifies this user story (or part of it - this can evolve)
3) Integration tests for the functional areas required to fulfil the end-to-end-test
4) Unit tests for the classes required to provide the functional areas
5) Code to pass 4, to pass 3, to pass 2, to verify against 1
... and when you have no fails, then add to 2, add to 3, add to 4, do 5, rinse and repeat etc.
Doing it this way, the consequences are:
- A lot of head scratching early in each cycle (creating end-to-end tests is hard).
- Usually having failing tests in your test suite, until you're done adding a feature/user story.
- Always being able to tell what the hell you were doing when you last stopped ... because your failing tests make that obvious.
- Never writing code that doesn't actually add value to the project by contributing to the implementation of a user story.
- Always working towards a 'shippable' goal, which is good for the client (and your cash flow if you bill against features) and also allows real user feedback to improve the work still to be done.
- Reduced cognitive load for you at a micro level - you fix problems in the code while that part of the code is what you're focussed on.
- Reduced cognitive load for you at a macro level - you don't have to hold the 'where am I going' part in your head, or remember to test manually, because your user story tests have that covered.
I would argue that as a consequence of those last two there's a bigger reward: being able to show up more fully for the rest of your life. A bug, or a concern about whether I've really implemented feature X correctly, impacts on my ability to be present for my family. I'm kind of not-really-there at dinner, because 90% of my brain is background processing code stuff. This still happens with TDD, but it happens a lot less.
So, if you don't find that TDD is improving your code and your process - not just your output but also your enjoyment - then my (cheeky) suggestion is that you've not discovered what it's really about yet.
In my experience, TDD is fun. Chemicals in my brain fun.