Unable to defend real testing

I am a developer and also a Team Lead. I remember the moment when I realized that if I’m a better developer, it is also because of the help of good testers. Initially it felt like cognitive dissonance: how could a good tester help me become a better developer? Even stranger was that when I realized this, I was deeply involved (and still am) with TDD, unit tests and Automated Acceptance Testing.

There are devs who think that if they do TDD/unit tests everything will be ok and testers will just do “some clicks”. Or even worse, for those devs, the testers should check all the nonsense made by a developer, because that developer will not test. So strange. As if a developer should not experiment, learn, explore, investigate, and find relevant information about his/her own work.

I am sad. Today I was unable to defend the real craft of testing (Rapid Software Testing). In a way, I felt that I was not able to defend those testers who contributed to making me a better professional. Such a strange feeling.

Then I began to think more. There is a lot of talk now regarding “manual” testing and “automated” testing.

Note: I once told a marketing person that she was doing “manual” marketing. And it sounded so strange, almost offensive. I told her about testing and “manual” testing and she understood. Still, I can’t forget that it almost sounded offensive.

Why do they say “manual” testing? Maybe because a tester should just verify a specification, no matter the form of that specification – wait, I am wrong, it is a user story with acceptance criteria, of course. Since it is a clear specification, it is imagined, I think, that they can also write down what they will test – some will say they write the test…. Hmmm… So, if they can write it down, this means they don’t need a tester with experience – maybe only temporarily – a junior should be enough, because the acceptance criteria are done by the Product Owner anyway. But if a junior can do it, then… wait, we can automate those steps, yes, yes, via the UI. So, hmm, why not get rid of the junior tester as well, because we have the automation, which is the ultimate goal actually. A click of a button and that’s it. For sure that automation code will not be developed by experienced developers, as the production code would be; no, no, we have “automation” testers.

So why should a tester matter so much when he/she is actually only doing some of the shallow testing a developer must do anyway? Shallow and clear steps which can be done by anyone.

So, I imagine that’s why – briefly described – some testers are called manual and others automated.

If testers spend time on shallow bugs, they will not have time to look for deeper ones. Bugs so deep that not even a dev can see them.

But what do those testers that I was not capable of defending actually do? Although they might not phrase it like this, they: investigate, doubt, search for information, pose questions, question everything, do more than confirm acceptance criteria, challenge the actual criteria, treat the definition of done/ready as relative when everything else is clear and absolute, are able to begin testing without specifications if needed, are able to show the multidimensionality of a problem/specification/story, use any tools which can help them (not just Selenium or…), will not answer different problems with the same answer because they are aware of context, support developers…. Please see here for a much longer list: http://www.developsense.com/blog/2017/04/deeper-testing-2-automating-the-testing/

 

Sorry, tester colleagues :(.

Managing technical debt discussion

Shu(1): We found a way to manage technical debt on the projects. I’m curious what you think about it.

Ri: Tell me more. How did this topic arise? Why was your team affected by this?

Shu: Well, it was a top down decision. So we have to manage it somehow. And to do so, we have to see where we are.

Ri: So you don’t know why. The fact that it’s a top down decision doesn’t mean you know the answer to the “why” question. So you are affected because you all have to do it. Hmm…ok. Why do you say “we have to see where we are”?

Shu: By “we have to see where we are” I mean that we need some numbers, like those provided by Sonar.

Ri: Numbers? Don’t you already know how your project stands from a technical point of view?

Shu: Hmm…I don’t understand what you’re trying to say. This is what I was told to do.

Ri: Don’t you participate actively in the day-to-day work of the project? Aren’t you capable of articulating some aspects yourself?

Shu: Well…yes.

Ri: So why are you acting childishly?

Shu: Childishly?

Ri: Yes. Why do you think complex and complicated stuff can be reduced to numbers? I understand that parents need to make things simple for their kids but you are an adult … you should be able to handle complexity and complicated stuff. (2)

Shu: Why do you think numbers are not ok?

Ri: I didn’t say that. I said that it’s not ok to reduce such an important topic to a simple number.

Shu: Well… then how do you know you are ok from a technical point of view?

Ri: Is technical debt actually a technical problem?

Shu: Huh?

Ri: Ok. Let’s take an example regarding numbers. You have module 1, which was not modified for years; some of its code is actually generated and it has lots of warnings. What do you do? What do the numbers regarding module 1 tell you, compared with module 2, which was modified 5 times in the last 2 sprints? In the end, Sonar tells you that you have 500 days of technical debt. So what will you do about that?

Shu: I will want everything to be cleaned up.

Ri: Why?

Shu: Because clean code matters.

Ri: Don’t deviate. Don’t tell me platitudes. What we are talking about here is how you interpret those numbers and how your actions are guided by them.

Shu: I think that, maybe, we should handle module 2 issues first, and then module 1 issues.

Ri: So what do those 500 days mean?

Shu: I don’t know.

Ri: Why not? It’s a clear number written there….

Shu: It seems that days representing module 2 are more important than days representing module 1.

Ri: Will it always be like this?

Shu: Hmmm….Now that you ask, I am tempted to say: no.

Ri: So, let’s go back to the question: what do those 500 days mean?

Shu: I don’t know. At first sight it seemed clear but now it doesn’t anymore.

Ri: Good, you have begun to think a little bit. So do you think that by seeing those numbers you know where you are?

Shu: Nope.

Ri: Why not? At least now you have something. Initially, it seemed, you had nothing.

Shu: Hmmm… actually I did have something…. By participating in the work that was done, I saw: the code, the automated checks, the professionalism of the team members, how prepared they are, their behaviour when writing code, bugs, issues, problems, testing. So in a way, tools like Sonar help a little bit by providing a perspective, but to say that they are the key to managing technical debt is not ok. So tools like Sonar just extend the list I mentioned above?

Ri: There is one point I wish you had mentioned. How do you know, by looking at the code, what problems it has had?

Shu: I have to see the history of the code. A lot of information lies in source control. Information which has to be analyzed and interpreted according to context.

Ri: So is technical debt only a technical problem, which can be tracked/simplified into some numbers shown to upper management… numbers which are occasionally stripped of their context?

Shu: Nope… A lot of discussions have to take place, based on the details as they unfold.(2)

Ri: Let me come back to the beginning, and ask: why did top management impose this?

Shu: Hmm. I think I know the answer but I can’t articulate it.

Ri: You just mentioned a long list of dimensions when managing technical debt. Are all people managing the projects aware of those dimensions?

Shu: No.

Ri: What should top management do in this case?

Shu: Impose, so things won’t go in the wrong direction.

Ri: Yes, but why?

Shu: Lack of trust? I believe they thought that having Sonar as a common denominator for all projects was a way to at least control the complexity and the mediocrity of the people managing the projects. Mediocrity which, in the end, greatly affects the dynamics of the project and also creates space for technical problems to flourish.
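The churn argument from the dialogue above can be sketched in a few lines of code. This is only a toy illustration, not how Sonar actually computes anything: the module names, the “debt days” and the commit counts are invented, and in real life the churn data would be mined from source control history (e.g. `git log --name-only`).

```python
from collections import Counter

def rank_debt_hotspots(debt_days, commits_touching):
    """Rank modules by tool-reported debt weighted by recent churn."""
    churn = Counter(commits_touching)
    # A module nobody touches contributes weight 0, no matter how many
    # "debt days" a tool assigns to it.
    weighted = {m: days * churn.get(m, 0) for m, days in debt_days.items()}
    return sorted(weighted.items(), key=lambda kv: kv[1], reverse=True)

# Module 1: 400 "debt days", but untouched for years.
# Module 2: 100 "debt days", but modified 5 times in the last 2 sprints.
debt = {"module1": 400, "module2": 100}
commits = ["module2"] * 5   # one entry per recent commit touching the module
ranking = rank_debt_hotspots(debt, commits)
print(ranking)  # module2 outranks module1 despite fewer raw "debt days"
```

The point of the sketch is that the raw “500 days” number carries no priority by itself; it only becomes meaningful when combined with context such as churn, and even that weighting is just one more perspective, not an answer.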


(1) Alistair Cockburn, “Shu Ha Ri”: http://alistair.cockburn.us/Shu+Ha+Ri

(2) James Bach, No KPIs: Use Discussion: http://www.satisfice.com/blog/archives/1440

Regarding Technical Debt Quadrants

I like the way Martin Fowler(1) tried to classify technical debt. I think it was a way to show, in a simple but not simplistic way, how to grasp the complex nature of what technical debt actually means and how to handle it. I intentionally said complex, not complicated.

So, technical debt might appear easy to grasp (especially due to code analysis tools), but maybe it’s not like that. It seems that we have to rely on the sapience of developers, and I would say of testers(2) as well, to take care of the code.

Below are some short thoughts regarding quadrant elements:

– reckless vs prudent for inadvertent: I connect these with the training/qualification of the developer(s);

– reckless inadvertent: it seems that (some) basic notions of OOP/design/architecture/… are missing;

– prudent inadvertent: we might be dealing with good developer(s) who had an “aha” moment. In this case, the simple fact that they realized something is a good thing; it’s a sign of progress;

– reckless deliberate: there is a clear intention to go “quick and dirty”. This might be ok at a certain moment, because the decision is intentional and made in full awareness of the context. And it is known that it will be taken care of later.

But I usually see these “quick and dirty” solutions from so-called senior developers who are unprofessional. For example, these unprofessional devs:

  • preach that they want clean code and assure their team members that they want to do it, but behind closed doors they take actions (technical work) that impede it;
  • are happy to do some small checks/verifications/investigations which require no management approval, but they still ask for time allocation and for the management to agree. And they do this because they know they can sabotage things so that, in the end, the “quick and dirty” way wins.

– prudent deliberate: if applied well (not by the so-called seniors mentioned above), it might be a sign of holistic thought. It makes me think of the mantra: “First make it work, then make it right, then make it small and fast.”(3)

So, things are not so simple when speaking about technical debt. For those who are interested only in numbers, I have a question: how many developers do you know who connect neuroscience, cognitive psychology, complexity theory and forensic psychology with how code is written?


(1) Martin Fowler, Technical Debt Quadrant: https://martinfowler.com/bliki/TechnicalDebtQuadrant.html

(2) Context Driven Testing:  http://context-driven-testing.com

(3) Robert C. Martin, Test First: http://blog.cleancoder.com/uncle-bob/2013/09/23/Test-first.html

Faking the reduction of technical debt

I spoke before about the multidimensionality of technical debt.

Unfortunately, “Reducing Technical Debt” is sometimes used to throw dust in people’s eyes. This is not so obvious when tools (like Sonar, NDepend, …) are also used for “Reducing Technical Debt”. I mean, who can contradict the efforts to reduce technical debt when the people doing it also use tools and can show some numbers? And if some nonsense warnings are also fixed to decrease the numbers… wow… they are really attacking the technical debt.
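This trick is easy to simulate. A toy sketch (the rule names and severities below are invented, not actual Sonar output): clearing only the cosmetic findings makes the headline number drop impressively while the real design problems remain untouched.

```python
# Hypothetical warning report, loosely shaped like a static-analysis export:
# (rule, severity) pairs. The rules and severities are invented for illustration.
report = [
    ("unused-import", "info"),
    ("naming-convention", "info"),
    ("naming-convention", "info"),
    ("cyclic-dependency", "critical"),
    ("god-class", "critical"),
]

def clear_trivial(findings):
    """'Attack the technical debt' by fixing only the easy, cosmetic findings."""
    return [(rule, sev) for rule, sev in findings if sev != "info"]

remaining = clear_trivial(report)
print(f"{len(report)} findings -> {len(remaining)}")  # the headline number drops 60%
# ...yet every design problem is still there:
print([rule for rule, sev in remaining if sev == "critical"])
```

A dashboard that only shows the total count rewards exactly this behaviour, which is why the numbers alone prove nothing about the real state of the code.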

Maybe the real test of whether work is really being done to reduce technical debt comes when some important technical problem must be solved. But because of strange forces, it doesn’t get solved. I’m speaking about the kind of problem whose solution would help in lots of directions (including financially!). The kind of problem which has been a pain for years and years but is never touched.

Why won’t this problem be solved? Probably because the current situation is more comfortable for the people who should take care of it. The present situation is easier; the status quo is known, as well as how to react within it. Perhaps the desire to reduce technical debt isn’t real in the first place, and the work is done just for show, just to follow the trend. Or maybe we simply don’t have too many scrum masters/POs/managers who actually understand how software is built…

Some perverse/strange/sick situations arise when dealing with reducing technical debt, especially when it is imposed. On the surface these situations might seem ok, but below the surface there might be a total mess. A mess that’s hard to detect if you’re not looking at the code but, instead, just analyzing some numbers generated by the tools.

Technical Debt in relation to Business Features/Roadmap

The technical debt subject, especially in relation to business features/roadmap, continues to pop up. Sometimes I have the feeling that certain people (especially managers) lack an understanding of certain dimensions which should probably be considered regarding this subject. So, below are some points to consider when thinking about the technical debt concept and its relationship with business features/roadmap:

  1. Dualism: There is a strong emphasis on a dualistic way of approaching things. In a way it’s understandable; it goes back to Cartesian dualism: dualistic categories in which one side is right and the other, probably, wrong. In this case, business and technical debt are thought of as dual. Technical debt, if approached with pragmatism, helps improve the cycle time; but pragmatism requires knowledge. So actually, when we speak with Business People/POs/…, they should think in terms of “business features” and “business cycle time”.(1)

Note: I said knowledge. With technical debt, unfortunately, the trending words are now tool names (like Sonar). The tools are good, but they must be used wisely.

  2. ”Quality is a statement about some person(s)”(2):

Let’s not forget that the decisions of POs/SMs/Architects/Developers might appear rational, but in fact those decisions are emotional and political. For example:

– a PO might be too afraid to impact the release;

– the work of an architect might be put aside or delayed, or he/she would have to have an argument with someone – which is not in his/her interest;

– a developer may not want stress. He/she has other priorities/things to do.

All the examples above might be understandable or not; it depends on the context. Still, “quality is a statement about some person(s)”.

  3. Technical Debt:

I think this is such a misunderstood concept. The term is used often and suddenly connected to tools. But tools only assist, and they should be used in a sapient way(3). There is much more to technical debt, and it’s sad to see it trivialized and bastardized. So when I think of technical debt I think of: prudent deliberate technical debt, reckless deliberate technical debt, reckless inadvertent technical debt, prudent inadvertent technical debt and the history of this concept.(4)(5)(6)

  4. The wrong metaphor regarding IT and code:

We think that when we apply measurements to code, it’s the same as in engineering. Well, it’s not quite like that. When civil engineers do their measurements and planning, there is a clear picture of what will be constructed, with no false positives(7). It’s a complicated domain, meaning that there is a clear link between causes and effects. In software it’s not like that. What we have are only fuzzy things. Ok, we’ll fix all the warnings and errors spotted by the code analysis tools (e.g. FxCop/Code Analysis, NDepend, Puma, Sonar, …). Then what? Will the software be ok? No. We do not have the same certainty as in engineering. There are lots of false positives, and if the approach is not careful, the holistic view might be lost.

So what’s the metaphor? Maybe it’s biology, ecology, medicine.

Note: Tools are ok. They help and extend human capabilities, but they are not a replacement for human judgment.

How can I show that the technical aspects are ok on the project? When I hear this question, at first glance I don’t think of technicalities. I try to see the holistic view: the social part (by the way, architecture has a social component), release time, impact on testing, lost time, features delivered, responding fast enough to changes, being capable of delivering innovative things fast enough – things not even the users know how to articulate or need…

So: Technical Debt or Business Feature/Roadmap? When this question is raised and a decision has to be made, for me it’s a heuristic that things were not thought through, not even at the last responsible moment. Things don’t happen in isolation, and this means it’s a good moment to think about risks.(8)(9)


(1) Jim Highsmith, “Features or Quality? Selling Software Excellence to Business Partners”: http://jimhighsmith.com/features-or-quality-selling-software-excellence-to-business-partners/

(2) Jerry Weinberg, “Agile and the definition of quality”: http://secretsofconsulting.blogspot.ro/2012/09/agile-and-definition-of-quality.html?m=1

(3) James Bach, Sapient Processes:  http://www.satisfice.com/blog/archives/99

(4) “Ward Explains Debt Metaphor”: http://wiki.c2.com/?WardExplainsDebtMetaphor

(5) Martin Fowler, “TechnicalDebtQuadrant”: https://martinfowler.com/bliki/TechnicalDebtQuadrant.html

(6) Michael “Doc” Norton, “Technical Debt”: https://cleancoders.com/videos/technical-debt

(7) Mark Seemann, Humane Code: https://cleancoders.com/episode/humane-code-real-episode-1

(8) Michael “Doc” Norton, a phrase regarding technical debt:  https://www.linkedin.com/feed/update/urn:li:activity:6338750195731353600

(9) Cory Flanigan post on LinkedIn and comment from  Michael “Doc” Norton: https://www.linkedin.com/feed/update/urn:li:activity:6339019436988727296

Code review discussion

Ri: Do you make code reviews?(1)

Shu: Yes. Every time we make a commit, we have to ask for two reviews. And the reviewers must put a comment.

Ri: You are explaining the formal process to me, which should not be confused with the quality of the actual review. Why are you doing it?

Shu: aaaa, because we must do it. It is mandatory. No commits should be made without code reviews.

Ri: So, for example, you use it for knowledge sharing?

Shu: Yes, yes, exactly.

Ri: Do you trust the reviews?

Shu: Of course, I can show you no commit was done without them.

Ri: That’s not what I asked you.

Shu: Yes, I trust them

Ri: What level of experience do your team members have?

Shu: Hmm… ughhh… confirmed (2–3 years of experience).

Ri: And they do the review between themselves?

Shu: Yes, of course.

Ri: Hmm, how can you trust those reviews and the technical quality of that code when the persons who made them don’t actually have that much experience in writing code?


(1) Alistair Cockburn, “Shu Ha Ri”: http://alistair.cockburn.us/Shu+Ha+Ri

Inattentional blindness and code reviews

I already spoke about inattentional blindness in the context of testing in a previous post: Inattentional blindness and test scripts. But there are implications also for code reviews.

So, “We asked 24 radiologists to perform a familiar lung-nodule detection task. A gorilla, 48 times the size of the average nodule, was inserted in the last case that was presented. Eighty-three percent of the radiologists did not see the gorilla. Eye tracking revealed that the majority of those who missed the gorilla looked directly at its location.”(1)

But how can the effects of inattentional blindness be reduced?

”A more effective strategy would be for a second radiologist, unfamiliar with the case and the tentative diagnosis, to examine the images and to look for secondary problems that might not have been noticed the first time through.”(2)

So, it does make sense to do a code review, because things might have gone unnoticed. But with complex systems the important thing is granularity. This means that we have to understand who performs the review and see the possible implications.

For example, let’s say a novice has written some code and another novice will review it. The junior reviewer will possibly understand where and why that code was written like this. But is that enough? I would say no. A senior/advanced developer has to review the code as well, because he/she:

– can share experience;

– has deep technical knowledge (OOP, architecture, clean code, properly dealing with legacy code, TDD, unit tests, algorithms);

– is capable of bringing more perspectives.

Is this still enough? I would say no. A tester can also take a look at the code, with the intent of using it to help with his/her testing.

There are some lessons here:

– a novice can participate in a review because it is a good occasion for him/her to learn. It is also a good occasion to become aware of the context and give feedback;

– it is not ok when the code review is not done by a senior, because slowly, slowly things will get into such bad shape that they will be difficult to repair;

– a senior developer might miss certain details;

– a manager should not feel comfortable just because the process does not allow code to be committed to source control without reviews – even if that process says something like: “a review must be made by at least one senior developer”;

– code review is related to testing in the sense that the tester searches the code for information which will be used in designing the tests;(3)

– this should not be used by developers as an excuse for a lack of professionalism.

This is praxis(4)….


(1) Trafton Drew, Melissa L.-H. Võ, and Jeremy M. Wolfe, Psychological Science September 2013;24(9):1848-1853, “The Invisible Gorilla Strikes Again:   Sustained Inattentional Blindness   in Expert Observers”: http://search.bwh.harvard.edu/new/pubs/DrewVoWolfe13.pdf

(2) Daniel Simons and Christopher Chabris, “The Invisible Gorilla”: https://www.amazon.com/gp/aw/d/B009QTTYCQ

(3) James Bach, Rapid Software Testing class discussion [2017]

(4) Dave Snowden,  “Of practical wisdom”:   http://cognitive-edge.com/blog/of-practical-wisdom/

Inattentional blindness and test scripts

“We asked 24 radiologists to perform a familiar lung-nodule detection task. A gorilla, 48 times the size of the average nodule, was inserted in the last case that was presented. Eighty-three percent of the radiologists did not see the gorilla. Eye tracking revealed that the majority of those who missed the gorilla looked directly at its location.”(1)

We think we pay attention to things, but it’s not like that. We’re aware of only a small part of what we see at any moment. So we might even look at something and not be able to see it – the eyes look at something without actually seeing it.

Initially, when I thought about test scripts in conjunction with the concept of inattentional blindness, I imagined it was a way to disprove test scripts entirely. I say this because of the following reasoning:

– if a test script is being followed with great precision, then for sure other things are overlooked; therefore, the script is not a very useful technique;

– but if the tester says that he/she will also check other things that are not included in the test script, this behaviour in a way invalidates the test script.

In other words, these two approaches practically invalidate the idea of a test script.

I admit I was biased against test scripts because I saw them (and I still see them) applied in a very, very wrong way.

James Bach surprised me by saying that, actually, test scripts are an advanced test technique that should not be done by novices(2).

An advanced tester is aware of his/her limitations and makes sure that the testing is not negatively affected when using test scripts.

The reason a junior tester should not receive test scripts is that, in the likely scenario, the scripts will not be run as they should be(2). A junior/novice tester should be guided/accompanied by a senior tester.

This is praxis(3)….


(1) Trafton Drew, Melissa L.-H. Võ, and Jeremy M. Wolfe, Psychological Science September 2013;24(9):1848-1853, “The Invisible Gorilla Strikes Again: Sustained Inattentional Blindness in Expert Observers”: http://search.bwh.harvard.edu/new/pubs/DrewVoWolfe13.pdf

(2) James Bach, Rapid Software Testing class discussion [2017]

(3) Dave Snowden,  “Of practical wisdom”:   http://cognitive-edge.com/blog/of-practical-wisdom/

It is not ok to define a tester as a manual tester

Let’s try an experiment(1):

Scrum Master = Manual Scrum Master

Programmer = Manual Programmer

Project Manager = Manual Project Manager

Product Owner = Manual Product Owner

Team Lead = Manual Team Lead

Technical Lead = Manual Technical Lead

Delivery Manager = Manual Delivery Manager

Security = Manual Security

Marketing = Manual Marketing

Coach = Manual Coach

Trainer = Manual Trainer

Mentor = Manual Mentor

CTO = Manual CTO

CEO = Manual CEO

It is not ok, is it?

Maybe this is happening because it is not known what a tester should really do and what testing, professional testing, actually is.

But it is also sad that, with all the agility, we have left professional testing outside, and this is not ok.

I am glad that Robert C. Martin has spoken about this. In a way, he began to heal the divide between agile and professional testing (Context Driven Testing, Rapid Software Testing). Below is an entire blog post written by Robert C. Martin, reproduced in full:

“James Bach gave a stirring keynote today at ACCU 2010. He described a vision of testing that our industry sorely needs. Towit: Testing requires sapience.

Testing, according to Bach, is not about assuring conformance to requirements; rather it is about understanding the requirements. Even that’s not quite right. It is not sufficient to simply understand and verify the requirements. A good tester uses the behavior of the system and the descriptions in the requirements, (and face-to-face interaction with the authors of both) to understand the motivation behind the system. Ultimately it is the tester’s job to divine the system that the customer imagined; and then to illuminate those parts of the system that are not consistent with that imagination.

It seems to me that James is attempting to define “professionalism” as it applies to testing. A professional tester does not blindly follow a test plan. A professional tester does not simply write test plans that reflect the stated requirements. Rather a professional tester takes responsibility for interpreting the requirements with intelligence. He tests, not only the system, but also (and more importantly) the assumptions of the programmers, and specifiers.

I like this view. I like it a lot. I like the fact that testers are seeking professionalism in the same way that developers are. I like the fact that testing is becoming a craft, and that people like James are passionate about that craft. There may yet be hope for our industry!

There has been a long standing frission between James’ view of testing and the Agile emphasis on TDD and automated tests. Agilists have been very focussed on creating suites of automated tests, and exposing the insanity (and inanity) of huge manual testing suites. This focus can be (and has been) misinterpreted as an anti-tester bias.

It seems to me that professional testers are completely compatible with agile development. No, that’s wrong. I think professional testers are utterly essential to agile development. I don’t want testers who rotely execute brain-dead manual test plans. I want testers using their brains! I want testers to be partners in the effort to create world-class, high-quality software. As a professional developer I want – I need – professional testers helping me find my blind spots, illuminating the naivete of my assumptions, and partnering with me to satisfy the needs of our customers.”(2)


(1) Michael Bolton, “Manual and automated testing”: http://www.developsense.com/blog/2013/02/manual-and-automated-testing/

(2) Robert C. Martin, “Sapient Testing: The “Professionalism” meme”: https://sites.google.com/site/unclebobconsultingllc/home/articles/sapient-testing-the-professionalism-meme