B & W is for Black and White (My A to Z of Software Testing, Part 3)

Last week we had our latest Test Engineering Alliance Meetup in Melbourne, and the entire evening was devoted to the topic of my last A to Z of Software Testing blog – Zero Defects. What became obvious from the outset (and this is usually the case when a bunch of intelligent people have a debate) was that agreeing on the definition of a Defect was a challenge in itself. This is not the first time I’ve discussed the definition of a software defect (and it won’t be the last), but I always enjoy the thrust and parry of the encounter. In fact, it reminded me that whenever I’ve been responsible for setting the Test Strategy for an organisation (or project), I have confirmed various important definitions by scrutinising the incumbent terminology for all things Testing. The reason for doing this is that people move on and ideas change, so agreeing on what we feel works for us today is always a good place to start. It helps to understand what is seen as Black and White and what is Grey.

Every organisation has its own context. Every organisation has its own set of beliefs and ethics. Every organisation has different people and therefore differing opinions. So why should we expect to adopt someone else’s ideas, definitions, terminology and so on? There’s no harm in referencing well-known definitions (some may even be borrowed from or based upon industry standards), but we should always be prepared to challenge the status quo when it comes to terminology. Some organisations have very technical users and business representatives, and therefore the common terminology can be skewed that way, while other organisations may require a far more business-focused terminology.

Getting back to our Meetup last week, we were debating the concept of Zero Defects, but the term defect was being readily interchanged with Bug/Problem/Issue etc. So, it’s not surprising that we confuse our partners and stakeholders.

At some point, we also expanded our conversation into the value of reporting defect numbers. This is quite a hot topic at the moment and one that I have not written about for some time. My current thinking is that numbers of defects (found/open/closed etc.) do help us with our contextual discussions. However, they can also introduce bias and misdirected focus.

It’s been interesting over the years how we, as an industry, have oscillated on the subject of metrics. The bean counters want something they can measure, and as they deal in dollars and cents, they want some sort of correlation to what they are getting for their money. Unfortunately, this is a very simplistic approach and we need to steer these people towards value – not just money per se.

Why not use the GFC sub-prime fiasco as an example of decisions founded on poor judgement? Some requirements will be of high value, some will be high risk, and some will be founded on poor judgement and dodgy thinking. At the outset of a project there is usually little that is Black and White – except (hopefully) an agreed outcome. If we have an outcome consensus, we have a better than average chance of getting Requirements that approach clarity, i.e. Black and White. If we fail at this very early stage to have clearly defined Requirements, the likelihood of a successful project is greatly diminished. The knock-on effect for analysis, coding, testing and implementation is tragic, and overruns are almost guaranteed.

A far better approach is to categorise Requirements by importance (to those that matter), risk (best/worst case scenarios), quality (how good the outcome can/must be), life expectancy (short-term/long-term stability) and all the non-functional aspects that we usually have to explain to the uninitiated. This kind of categorisation comes with the added bonus of discussion among all the major stakeholders on the project, and leads to less Grey and more Black and White. Therefore, if we follow this approach throughout the entire SDLC (no matter which one we choose), the perceived need to report numbers (rather than goals and achievements) can be dealt with more easily.
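To make that concrete, here is a minimal sketch (in Python, with invented names such as Requirement, Rating and discussion_agenda; nothing here is prescribed by the approach above) of how requirements might carry those category ratings so that stakeholder conversations start from importance and risk rather than raw defect counts:

```python
from dataclasses import dataclass
from enum import Enum


class Rating(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class Requirement:
    """One requirement, categorised along the dimensions discussed above."""
    name: str
    importance: Rating       # to those that matter
    risk: Rating             # best/worst case scenarios
    quality_bar: Rating      # how good the outcome can/must be
    life_expectancy: Rating  # short-term vs long-term stability


def discussion_agenda(requirements):
    """Order requirements so that high-importance, high-risk items
    come first: these are the ones stakeholders should debate."""
    return sorted(
        requirements,
        key=lambda r: (r.importance.value, r.risk.value),
        reverse=True,
    )


if __name__ == "__main__":
    reqs = [
        Requirement("Audit trail", Rating.HIGH, Rating.MEDIUM, Rating.HIGH, Rating.HIGH),
        Requirement("Splash screen", Rating.LOW, Rating.LOW, Rating.MEDIUM, Rating.LOW),
        Requirement("Payment gateway", Rating.HIGH, Rating.HIGH, Rating.HIGH, Rating.HIGH),
    ]
    for r in discussion_agenda(reqs):
        print(f"{r.name}: importance={r.importance.name}, risk={r.risk.name}")
```

The code itself is beside the point; the design choice it illustrates is that once each requirement carries explicit ratings, the high-importance, high-risk items naturally set the agenda, and reporting can focus on goals and achievements rather than counts.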

Dateline: Melbourne, Monday March 21, 2016


2 thoughts on “B & W is for Black and White (My A to Z of Software Testing, Part 3)”

  1. I’ll repost Mike’s blogpost from today. http://testsheepnz.blogspot.co.nz/2016/03/software-testing-when-were-preaching-to.html

    In my response to him I mention that the very demand for metrics shows that the project has some serious issues. Metrics are proof of a process that suffers from “a lack of leadership, trust and knowledge”. The causes might be rigid processes that ignore context, a lack of trust in the hierarchy, political agendas, plain greenhorn antics and much, much more.

    I have seen projects that have metrics coming out of their ears. Defects, DRRs, … the works. One thing none of them had was a realistic assessment of what was going on. They were rife with discussions about contracts, delivery scope, DRRs (of course), … none of which moved the project forward one bit or actually added any value to the end user. All those projects also had frightfully demotivated (and sarcastic) employees.

    On the other hand, I have been on agile projects with little to no reporting, where reporting was mostly in prose/email format and where there were technical metrics for informing SMEs only. Generally happy employees, and definitely a product that was actually shippable and useful. Don’t get me wrong, they still had their challenges, but at least they were real issues and not some made-up stuff to get some DRR to go up.

    One of the things I always wish for on those (mostly big) metrics-heavy projects is that management would at least once use the actual system they are trying so desperately to deliver. If they did, they’d immediately realise they are flogging a dead horse. If you have a stat saying there are 392 defects open (not that that means anything), defects are really not the issue you have. Neither is testing an issue. This just proves you’re so far off the track that even your GPS has given up.

    But no, most people I see in IT still stoically think metrics in testing and elsewhere are good.

    By the way, I wonder why it is OK not to count LOCs anymore. Somehow developers have succeeded in laying that one to rest (and many other coding metrics). It would be interesting to find out how.
