1) When you get an unexpected outcome, do you assume it’s a Bug?
2) % Production data (extracts), % manufactured data?
3) Controlled analysis or independent thinking?
4) Software Testing: Science or Art?
5) When you find a Bug do you consider it a positive or a negative?
6) Is your Testing 50% done or 50% outstanding?
7) Do you look forward to discussions with Developers regarding their work?
8) How much Testing is enough?
9) Context-Driven or Factory-fed?
10) Does Automation speed up or slow down your Testing initiatives?
11) Vendor, Open-Source or Bespoke tools?
12) Certification or Accreditation?
13) % Prevention, % Cure?
14) Do you get concerned if you don’t find enough Bugs?
15) Should the Testing Team have a say in the release of software?
16) What is an acceptable pass/fail ratio for System Test?
17) Triage: AM or PM?
18) UAT: Should non-professional Testers (i.e. Users) perform Test execution tasks?
19) Testing effort: Onshore or offshore?
20) Confidence or scepticism?
21) Is your Testing completed or finished?
Dateline: Melbourne, Friday November 22, 2013