I've been thinking a lot lately about how implementing an automated testing (specifically, TDD) initiative affects a company. I think these problems are much less of an issue on projects that start out using TDD from day one, but they become increasingly severe as time goes on and entropy sets in.
There are several issues with implementing TDD as a policy at a company:
Management buy-in is crucial. Without it, you cannot hope to achieve the level of change needed to implement TDD. Unfortunately, it can also be the most difficult obstacle to overcome.
It’s hard to justify putting forth a huge automated testing or refactoring effort without being able to show direct business value at the end of the project. No manager is going to want to stop developing new features altogether while a suitable amount of defect fixing, refactoring, and test creation takes place. A well-articulated plan, with clear goals on what TDD will achieve, needs to be put before management before we can hope to move forward with any kind of blessing.
Not to mention that performing TDD by definition requires more of an upfront investment than simply hacking out code. This is a problem because TDD can skew existing metrics for planning time. How would you like it if one of your employees came up to you one day and said, “Sorry boss, but everything I’ve been working on for the past six months has been artificially skewing all of your data points related to how long tasks take, which makes them invalid.”
Remember, without management support, your TDD efforts are likely to die quickly.
Now that we have management buy-in, how do we go about accurately measuring the success or failure of our testing project? Do we use code coverage or the number of tests over time? Should we measure how many defects are covered by our tests? Do we hold code reviews for our tests, or trust the programmers to take care of everything?
Code coverage and test creation numbers are pointless if developers are not writing good tests. These metrics are the easiest to gather, but provide the least value.
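To make that concrete, here is a minimal sketch (the `apply_discount` function and both tests are hypothetical) of how a test can inflate coverage numbers while verifying nothing:

```python
# A hypothetical function under test.
def apply_discount(price, rate):
    if rate < 0 or rate > 1:
        raise ValueError("rate must be between 0 and 1")
    return price * (1 - rate)

# This "test" executes the happy path, so coverage tools count every
# line as covered -- but it asserts nothing, so it would still pass
# if the math were completely wrong.
def test_apply_discount_no_assertions():
    apply_discount(100, 0.2)

# A meaningful test pins down the actual behavior.
def test_apply_discount_checks_result():
    assert apply_discount(100, 0.2) == 80.0
```

Both tests produce identical coverage reports; only the second one protects you from a regression.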
I think the best measurement of success is showing that the total number of defects per release decreases over time. The trade-off is that it takes a long time to prove your efforts are paying off down the line.
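Tracking that metric can be as simple as comparing defect counts across releases. A minimal sketch, using entirely hypothetical numbers (real data is noisier, so a rolling average would be more robust than a strict decrease):

```python
# Hypothetical defect counts for successive releases.
defects_per_release = {
    "1.0": 42,
    "1.1": 38,
    "1.2": 31,
    "1.3": 27,
}

def is_trending_down(counts):
    """Return True if every release has fewer defects than the one before."""
    values = list(counts.values())
    return all(later < earlier for earlier, later in zip(values, values[1:]))

print(is_trending_down(defects_per_release))  # True for the data above
```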
Whatever metric or combination of metrics you end up using, remember the end goal: release faster and gain the ability to change code with impunity.
This is one issue I don’t think companies adequately prepare for. Developer buy-in is a huge issue, and it can be extremely difficult to achieve.
Developers who do not want to start TDD will come up with any number of excuses.
Less-informed developers will view the new testing strategy as a direct attack on their day-to-day productivity and will fight you every step of the way.
I won’t drone on about the benefits of TDD. Other blog posts do this better than I can. The key point here is to expect a certain level of push-back from developers when you start implementing TDD.
Whether we get good developer support or not, we still have to reckon with the toll that retrofitting automated tests onto a project takes on the entire development team. The sheer amount of effort needed to sufficiently test a poorly written method with high cyclomatic complexity is mind-boggling.
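A quick sketch of why complexity drives testing effort (the `shipping_cost` function is hypothetical): each branch point adds an independent path, and every path needs at least one test.

```python
# A hypothetical method with cyclomatic complexity 4: three decision
# points (the guard clause plus two flags) and the default path.
# Exercising every independent path requires at least four test cases,
# and real methods with a dozen branches need far more.
def shipping_cost(weight, is_express, is_international):
    if weight <= 0:
        raise ValueError("weight must be positive")
    cost = weight * 2.0
    if is_express:
        cost += 10.0
    if is_international:
        cost *= 1.5
    return cost
```

Four short tests cover this method; a method with nested conditionals and hidden dependencies can easily demand dozens of tests plus mocks before it is safe to touch.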
Let’s face it. A developer is not going to want to spend several hours mocking up a test case for a defect that takes 30 minutes to fix.
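The payoff argument for that extra effort is the regression test itself. A minimal sketch of the test-first defect workflow, with a hypothetical `round_price` bug:

```python
# Hypothetical defect: prices were being truncated with int(value)
# instead of rounded to two decimal places.
def round_price(value):
    return round(value, 2)  # the 30-minute fix

# The test is written *before* the fix: it fails against the buggy
# code, passes afterward, and guards that behavior from then on.
def test_rounds_instead_of_truncating():
    assert round_price(19.999) == 20.0
    assert round_price(10.004) == 10.0

test_rounds_instead_of_truncating()
```

The several hours spent reproducing the defect in a test buy permanent protection against reintroducing it, which is the trade the developer rarely sees in the moment.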
This is an issue that can likely be mitigated by spreading out the pain across as many developers as possible and alternating work assignments between writing new code and fixing defects.
I think the key to success here is to strive for continual improvement. Nobody started out as an expert on day one.
A great place to start is to read Clean Code by Robert C. Martin (Uncle Bob). It’s an excellent jumping-off point and will set you down the right path of writing clean, testable code.