Last week we had our latest Test Engineering Alliance Meetup in Melbourne, and the entire evening was devoted to the topic of my last A to Z of Software Testing Blog – Zero Defects. What became obvious from the outset (and this is usually the case when a bunch of intelligent people have a debate) was that agreeing on the definition of a Defect was a challenge in itself. This is not the first time I’ve discussed the definition of a software defect (and it won’t be the last), but I always enjoy the thrust and parry of the encounter. In fact, it reminded me that whenever I’ve been responsible for setting the Test Strategy for an organisation (or project), I have confirmed various important definitions by scrutinising the incumbent terminology for all things Testing. The reason for doing this is that people move on and ideas change, so agreeing on what works for us today is always a good place to start. It helps to understand what is seen as Black and White and what is Grey.
Every organisation has its own context. Every organisation has its own set of beliefs and ethics. Every organisation has different people and therefore differing opinions. So why should we expect to adopt someone else’s ideas/definitions/terminology etc.? There’s no harm in referencing well-known definitions (some may even be borrowed from or based upon industry standards), but we should always be prepared to challenge the status quo when it comes to terminology. Some organisations have very technical users and business representatives, and therefore the common terminology can be skewed that way, while other organisations may require a far more business-focused terminology.
Getting back to our Meetup last week, we were debating the concept of Zero Defects, but the term defect was being readily interchanged with Bug/Problem/Issue etc. So, it’s not surprising that we confuse our partners and stakeholders.
At some point, we also expanded our conversation into the value of reporting defect numbers. This is quite a hot topic at the moment and one that I have not written about for some time. My current thinking is that numbers of defects (found/open/closed etc.) do help us with our contextual discussions. However, they can also introduce bias and misdirected focus.
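To make the found/open/closed idea concrete, here is a minimal sketch of what such counts amount to. The statuses and the defect records are illustrative assumptions, not a recommended reporting scheme – and note that the raw numbers say nothing about severity or value, which is exactly where the bias creeps in.

```python
from collections import Counter

# Hypothetical defect records; "status" values are assumed, not prescribed.
defects = [
    {"id": 1, "status": "open"},
    {"id": 2, "status": "closed"},
    {"id": 3, "status": "open"},
    {"id": 4, "status": "closed"},
    {"id": 5, "status": "closed"},
]

# The classic found/open/closed figures reduce to simple tallies.
counts = Counter(d["status"] for d in defects)
total_found = len(defects)

print(total_found, counts["open"], counts["closed"])  # 5 2 3
```

Two trivial open defects and one catastrophic one produce very similar-looking numbers, which is why counts alone invite misdirected focus.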
It’s been interesting over the years how we, as an industry, have oscillated on the subject of metrics. The bean counters want something they can measure, and as they deal in dollars and cents, they want some sort of correlation to what they are getting for their money. Unfortunately, this is a very simplistic approach and we need to steer these people towards value – not just money per se.
Why not use the GFC sub-prime fiasco as an example? Some requirements will be of high value, some will be high risk, and some will be founded on poor judgement and dodgy thinking. At the outset of a project there is usually little that is Black and White – except (hopefully) an agreed outcome. If we have an outcome consensus, we have a better than average chance of getting Requirements that approach clarity, i.e. Black and White. If we fail at this very early stage (to have clearly defined Requirements), the likelihood of a successful project is greatly diminished. The knock-on effect for analysis, coding, testing and implementation is tragic, and overruns are almost guaranteed.
A far better approach is to categorise Requirements by importance (to those that matter), risk (best/worst case scenarios), quality (how good the outcome can/must be), life expectancy (short-term / long-term stability) and all the non-functional aspects that we usually have to explain to the uninitiated. This kind of categorisation comes with the added bonus of discussion among all the major stakeholders on the project and leads to less Grey and more Black and White. Therefore, if we follow this approach throughout the entire SDLC (no matter which one we choose), the perceived need to report numbers (rather than goals and achievements) can be dealt with more easily.
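The categorisation above can be sketched in code. Everything here is an assumption for illustration – the `Requirement` class, the 1–5 scales, and the sample entries are mine, not a prescribed schema – but it shows how importance and risk, once agreed with stakeholders, let the high-stakes items surface first.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    name: str
    importance: int       # 1 (low) .. 5 (critical), agreed with those that matter
    risk: int             # 1 (benign) .. 5 (worst-case severe)
    quality: int          # 1 (good enough) .. 5 (must be near-flawless)
    life_expectancy: str  # "short-term" or "long-term" stability
    non_functional: list = field(default_factory=list)  # the aspects we explain to the uninitiated

def triage(requirements):
    """Order requirements so high-importance, high-risk items come first –
    the Black and White items worth agreeing on early."""
    return sorted(requirements, key=lambda r: (r.importance, r.risk), reverse=True)

# Hypothetical sample requirements.
reqs = [
    Requirement("audit logging", importance=3, risk=2, quality=3, life_expectancy="long-term"),
    Requirement("payment capture", importance=5, risk=5, quality=5, life_expectancy="long-term",
                non_functional=["availability", "security"]),
    Requirement("promo banner", importance=2, risk=1, quality=2, life_expectancy="short-term"),
]

for r in triage(reqs):
    print(r.name)
```

The point is not the scoring mechanics but that assigning the scores forces exactly the stakeholder conversation described above – and goals ("payment capture meets its quality bar") become more reportable than raw numbers.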
Dateline: Melbourne, Monday March 21, 2016