In the first part, Marius expressed his opinion on this subject; below, I try to articulate my own thoughts.
I came across the topic on LinkedIn and couldn’t resist commenting on it. I personally don’t experience people from other departments underrating me as a tester (or maybe I just haven’t noticed yet), but I see it happening to other testers a lot. I want to discuss two main points that kept coming to mind as I read the original article(1) and that might have a lot to do with why this is happening.
I. The „test automation“ fallacy
Most of us testers use the wrong terminology when we talk about opportunities for using automation – especially when we use it for build confirmation purposes.
In that case, the tester typically designs and encodes a suite of checks to run after each new build, to see whether the results have changed since the suite was last run.
In certain situations, this is an efficient way to save time. However, we can’t talk about it as „test automation“. Why not? The answer goes all the way down to the question „what is a test?“.
There are many definitions of testing. Since I believe testing is best thought of as a skilled social activity, I prefer the definition offered by James Bach and Michael Bolton. They define a test as an „instance of testing“, where testing stands for „the process of evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modeling, observation, inference, etc.“(2)
So how do I myself approach encoding an automated check?
1. At first, I have to get to know the product I’m testing; I have to learn about it – and for that I use exploration, study, questioning, modeling, observation, inference, etc. I can’t set up a machine to do that for me.
2. Then I have to make some decisions about the tradeoffs:
a. Which features of the product are crucial to do regression testing on?
b. What specific checks will be valuable to automate for those features?
c. What tools will I use for the automation?
3. Next, I have to design and encode the check.
4. I have to tell the computer what the words „pass“ and „fail“ mean; I have to think of an oracle that is self-verifying – that means I have to decide what the anticipated result of the check is and include it in the code (this is often called an assertion), so that the computer can tell whether the check passed (I sketch a minimal example of this below).
But that’s not all, is it? If the check fails, it doesn’t automatically mean there’s a bug in the product – or even that the feature I wrote the check for has changed. Maybe the environment is just down. Or the tool is badly configured. Or there’s a bug in my automation. So if I see a check fail, I have to go back to it and make sense of what really happened. Machines can’t do that by themselves either.
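To make this concrete, here is a minimal sketch of what such an encoded check might look like in Python. The endpoint, the expected values and the function name are made up purely for illustration – the point is that the machine only applies the assertions I wrote; everything around them is human work.

```python
# A minimal, hypothetical check: the URL, the expected status code and the
# expected payload value are assumptions a human made during test design.
import requests

EXPECTED_STATUS = 200
EXPECTED_CURRENCY = "EUR"  # the anticipated result I decided on up front

def check_price_endpoint_returns_euro():
    # The machine only executes this; it has no idea whether the expectation
    # itself is still reasonable after the product changes.
    response = requests.get("https://example.test/api/price/42")

    # The encoded "oracle": two assertions that define pass and fail.
    assert response.status_code == EXPECTED_STATUS, (
        f"Expected HTTP {EXPECTED_STATUS}, got {response.status_code}"
    )
    assert response.json().get("currency") == EXPECTED_CURRENCY, (
        "Currency mismatch: a product bug, a changed feature, or bad test "
        "data? A human has to investigate."
    )

if __name__ == "__main__":
    check_price_endpoint_returns_euro()
    print("check passed")
```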
The „it’s all just semantics“ argument
You can see that there’s a lot of human thinking involved in creating automated checks. If we talk about this complex activity as „test automation“, it feeds the belief that testing can be automated – especially among managers who are not skilled in testing and want to save money wherever possible. Why would they appreciate testers if we project the illusion of replaceability?
Why wouldn’t they think they can replace humans with machinery if all that testers do is talk about new ways of using „test automation“ or performing „automated testing“?
Testers like Michael Bolton and James Bach came up with the much more suitable term “checking”(2) years ago, yet most people in the testing industry still cling to the test automation nonsense. They choose to dismiss the difference between checking and testing with the argument that it’s just semantics and not really that important – but it’s us testers it hurts the most.
II. The obsession with bad certifications
A lot of testers fall into the ISTQB/TMap/<whatever else> certification trap: “I’m certified, therefore I’m a professional tester!” They might get the feeling that since they already managed to get a piece of paper certifying that they passed a bad exam, they no longer have to invest in further education.
The number one problem I have with these certifications is that they don’t teach testers how to test. They teach testers to memorize a book and then pick one of four options as an answer on the exam. And you only have to get 65% of the answers right to pass it anyway. 😊 (3)
I’m talking mainly about ISTQB CTFL here; how can a tester with that certification be taken seriously if it takes what – a day? – to memorize the “right” answers to questions you know beforehand?
If you compare the ISTQB CTFL certification to a different engineering certification such as Offensive Security Certified Professional (OSCP)(4), it’s like night and day. I see OSCP as the kind of certification that might actually be useful if you really feel the urge to measure an individual’s depth of skill with a certification; the student goes through a 24-hour challenge in an unfamiliar lab environment, performing actual security-related tasks and demonstrating their skills along the way.
You might say that it’s silly to compare the advanced OSCP to the most basic ISTQB exam – but even the most basic programming certifications are largely built on examples of code that the students have to understand in order to pass the exam, not on memorizing a book about programming theory. Yet nobody does ANY actual testing in ISTQB’s CTFL – and people still buy their expensive courses and pay for the exams.
[Sidenote: I have met people who took the ISTQB courses (not just the CTFL, but also the advanced-level test manager course) and told me that all they did there was listen to the instructor read the syllabus aloud with two or three examples, and that it was a complete waste of money and time. I don’t understand how ISTQB can charge money for this.
On the other hand, I have also met people who claimed they enjoyed the course, but only because their instructor was more open-minded and left out a bunch of the syllabus nonsense, replacing it with some useful examples. Good for those people – they managed to get at least some value out of it. But still – would you want to support such an organization when they clearly don’t care about your experience with the courses, as long as you buy them?]
Some testers take these exams just because they think there’s no other option than bad factory-school courses. That’s not true. It is not „a certification“ in the ISTQB sense of the word, but there is a series of four awesome courses called Black Box Software Testing (BBST) by Cem Kaner, and they are as close to teaching real testing as you can currently get.(5)
Conclusion
Of course there’s more to the topic than just these two points. But even considering just these two, I really can’t blame people outside the testing space for treating some of us testers like second-class citizens at work. When we stop taking ridiculous certifications (and then bragging about passing them on social media) and stop talking about testing as the next department to be replaced with machinery, then we will stand a chance of not being underrated.
[Thanks to Michael Bolton and James Bach for their helpful reviews.]
(1) Claire Goss, “Testers – Is it our own fault we are Underrated?!”, http://www.exactest.ie/blog-testers-underrated.html
(2) James Bach, “Testing and Checking Refined“, https://www.satisfice.com/blog/archives/856
(3) “CTFL2018”, https://www.istqb.org/about-as/faqs.html?view=category&id=79
(4) “Offensive Security Certified Professional”, https://www.offensive-security.com/information-security-certifications/oscp-offensive-security-certified-professional/
(5) “About the Black Box Software Testing Courses”, https://www.associationforsoftwaretesting.org/courses/