This is the first post from my series entitled “Deconstructing Testing” – aimed at looking under the covers at many of our accepted paradigms and asking the question: “Is there a better way?”
As is the case with all my posts, this is an experience-driven account, not something based on what you will find in textbooks; therefore, you should not be surprised when I say that Evidence (or proof) must be empirical, not circumstantial or hypothetical. After all, testing is a type of experiment or trial.
One of the many things I have noticed over my 20+ years as a software testing professional is the focus on DOING, as opposed to the not-doing: planning, preparing, estimating and so on. I have generally put this down to human nature and the fact that the majority of testers prefer to do stuff rather than prepare to do stuff (or write about the stuff they just did). “Documentation sucks” – as many an IT professional has told me on far too many occasions… So how do we get around this? The obvious answer is to hire people who enjoy the documentation stuff. In the ’90s, if I couldn’t find enough testers, I would hire “technical writers” – if any of you have worked with or know Sally Davis (see my LinkedIn connections), I hired her to do exactly this. Sadly, the role of the Technical Writer has gone the way of the Database Administrator and Systems Programmer – generally into oblivion.
However, there are other options, and one of these is to develop a Test Strategy that spells out clearly your approach to the gathering and publishing of Evidence. Sometime during the mid ’90s I developed one of my first strategic tools for Test Management – “The 5 P’s of Testing” – to provide a framework for my teams and projects. In fact, if any of you came on one of my early Introduction to Software Testing courses between ’96 and ’98, you will remember them as:
- Planning (provides evidence of what Testing is required)
- Preparing (provides evidence of how and why Testing will be performed)
- Predicting (provides evidence of how much Testing is enough and how many bugs will be found)
- Performing (provides day to day evidence of tests conducted and bugs found/fixed)
- Publishing (provides evidence of the overall outcome/result with a recommendation)
NOTE: If any of you reading this know two of our leading Testing professionals in Australia – Catherine Lockstone or Natasha Norton – they were two of my first students on this course.
The next thing to consider is how much evidence is necessary or optimal. This depends on the solution under test and the risk/quality measures agreed with the Sponsor and/or Project Manager. I’m not going to explain the basics of Risk-Based Testing here, but the same thinking applies to evidence as it does to the actual types of testing and the depth to which one should test. If you are working on a short-term, low-risk outcome then your evidence should be in line with this and not be over-cooked. If you are working on a long-term (multi-phased, multi-year) project with major technical and business complexities your evidence should stand up in a court of law (if required).
Fortunately, for those of you testing software today, there are many technical tools and media formats that simplify capturing evidence and not everything has to appear in a Word document, as it did in days gone by. I am a great believer in providing evidence in the simplest form possible – I’ve even used sticky notes to plan, prepare, predict and publish results, although I did write a formal report to gain sign-off from the client as part of demonstrating that the solution worked for their business.
I know far too many Test Managers who think they have to produce reams of progress reports (sometimes daily) that no-one reads or has the time to digest (I call this “cover my arse” reporting). One of my golden rules (which applies to all aspects of the Testing Process) is to use the KISS principle (based upon the teachings of William of Occam) – I will be devoting several future blog posts to how to apply the KISS principle to your testing efforts.
So, in summary, before you set out on the myriad of activities that make up your software testing initiative, agree with your key stakeholders how much evidence is appropriate and how this evidence will be captured and distributed. For those of you working with the more enlightened Project Managers, the PMO will have this sorted as part of their Communications Strategy – now there’s another can of worms I will be opening soon.
I hope that was useful – please let me know what you think…