Over the past 25 years I’ve been capturing data on every project I’ve worked on. As a result, on the majority of my projects within the past 10 years I have been able to predict, to within 5%, how many bugs we would uncover in System Integration Testing and/or User Acceptance Testing.
The projects I’ve worked on mainly fall into three categories – retail banking, retail utilities (gas and electricity) and telecommunications. I’ve also worked in government agencies, wealth management, online retailing, and transport and logistics. Over 90% of these projects followed some form of waterfall approach and were multi-million dollar initiatives with hundreds (sometimes thousands) of people involved. The largest (by dollars and personnel) was my most successful in terms of predicting a “Go Live” date – we were 1 week late on a prediction we had made 9 months earlier. That’s not to say we went live with no bugs – far from it; but we did go live with software that was good enough to support the business.
So, how is this possible? In simple terms – by fully understanding the context within which I work. It is absolutely critical for me to understand the overall experience of the project team, the major risks and dependencies associated with delivering the expected outcome(s), and the impact and importance of each feature/function being delivered. It is also crucial for me to have complete control of the Testing and Implementation schedules.
I am not a fan of making grand statements or promises about what my (Test) team can achieve on a project, but I learnt many years ago that if you don’t enter into a conversation/negotiation with senior management types about what is achievable and what is not, you’re on a hiding to nothing.
Fortunately (for me) I have a mild form of OCD, so I check everything at least three times, recheck it another dozen times, and keep records of those checks. This means I have excellent records of what I’ve worked on: what worked and what didn’t, why it worked and why it didn’t, and so on. Therefore, when I join a new project I know what to look for, what to ask, who to ask and who to believe. This means I can build a clear and realistic picture of what is likely to happen, how it’s going to happen and when it’s going to happen. There are very few genuinely new problems on IT projects, so being prepared saves an enormous amount of time.
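The author doesn’t reveal his actual method, but the idea of turning historical project records into a prediction can be sketched in a few lines. This is a hypothetical illustration only – the field names, sample figures and the defects-per-test-case rate are all my own assumptions, not his process:

```python
# Hypothetical sketch: estimating expected SIT/UAT defect counts from
# historical records of comparable projects. All figures are illustrative.

HISTORY = [
    # (test_cases_executed, defects_found) from past, similar-context projects
    (1200, 310),
    (800, 195),
    (2500, 640),
]

def defect_rate(history):
    """Average defects found per test case across past projects."""
    total_cases = sum(tc for tc, _ in history)
    total_defects = sum(d for _, d in history)
    return total_defects / total_cases

def predict_defects(planned_test_cases, history=HISTORY):
    """Point estimate of defects for a new project of similar context."""
    return round(planned_test_cases * defect_rate(history))

print(predict_defects(1500))  # → 382
```

The point isn’t the arithmetic – it’s that a prediction like this is only as good as the comparability of the historical projects behind it, which is exactly why understanding context matters so much.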
I spoke at a workshop in Wellington recently about known unknowns and unknown unknowns and the impact that these can have on projects. There are always unknown unknowns, it’s just a matter of how you deal with them and how you manage their impact – that’s why we conduct ongoing Risk assessments.
Knowing how many bugs we might find on any given project is useful, but it has little to do with the overall end-game – delivering a successful project. The fact that the majority of Project Managers are focused on this measure, and several other (relatively unimportant) numbers, means that whatever I may think of the value of keeping count, I still have to do it. This doesn’t mean that I manage my part of the project around these numbers either; it just means I have to allocate some effort to keeping the PM off my back.
Even though I started off by stating that I have profiled the projects I’ve worked on over the past 25 years, I’m not saying that it’s easy to predict outcomes. I have a defined process that continues to work for me. Some may say that I’ve been lucky with my predictions, to which I’d reply “maybe, but I believe you make your own luck, and detailed preparation and meticulous attention to detail can definitely shift the odds in your favour”. Being lucky for 10 years isn’t a bad record – but then again, I have walked away from projects because I didn’t like the risk profile they presented!
The bottom line for me is that if you are prepared to put in the effort, keep meticulous records and become adept at managing risk, you too can successfully predict Testing outcomes. Focus on the unknowns rather than the knowns, because the unknowns are far more likely to prevent you from being successful.
In Part 2 I will expand on the unknown unknowns (sometimes referred to as Black Swans) as these situations have the potential to derail any project.
(Revised) Dateline: Saturday January 11, 2014