Faking the reduction of technical debt

I spoke before about the multidimensionality of technical debt.

Unfortunately, “Reducing Technical Debt” is sometimes used to throw dust in people’s eyes. The deception is less obvious when tools for “Reducing Technical Debt” (like Sonar, NDepend, …) are also involved. I mean, who can contradict the efforts to reduce technical debt when the people doing it also use tools and can show some numbers? And if some nonsensical warnings are also solved to decrease those numbers… wow… they are really attacking the technical debt.

Maybe the real test of whether work is really being done to reduce the technical debt comes when some important technical problem must be solved, but, because of strange forces, it doesn’t get solved. I’m speaking about the kind of problem whose solution would help in lots of directions (including financially!), the kind of problem that has been a pain for years and years but is never touched.

Why won’t this problem be solved? Probably because the current situation is more comfortable for the people who should take care of it. The present situation is easier: the status quo is known, and so is how to react within it. Perhaps the desire to reduce the technical debt isn’t real in the first place, and the work is done just for show, just to follow the trend. Or maybe it’s because we simply don’t have many scrum masters/POs/managers who actually understand how software is built…

Some perverse/strange/sick situations arise when reducing technical debt, especially when it is imposed. On the surface these situations might seem OK, but below the surface there might be a total mess, a mess that’s hard to detect if you’re not looking at the code but instead just analyzing some numbers generated by the tools.

Technical Debt in relation to Business Features/Roadmap

The technical debt subject, especially in relation to business features/roadmap, keeps popping up. Sometimes I have the feeling that certain people (especially managers) lack an understanding of certain dimensions which should be considered in regard to this subject. So, below are some points to consider when thinking about the technical debt concept and its relationship with business features/roadmap:

  1. Dualism: There is a strong emphasis on a dualistic way of approaching things. In a way it’s understandable; it goes back to Cartesian dualism: dualistic categories in which one side is right and the other, probably, wrong. In this case business and technical debt are thought of as being dual. Technical debt, if approached with pragmatism, helps improve the cycle time; but pragmatism requires knowledge. So actually, if we speak with business people/POs/…, they should think in terms of “business features” and “business cycle time.”(1)

Note: I said knowledge. With technical debt, unfortunately, the trending words are now tool names (like Sonar). The tools are good, but they must be used wisely.

  2. “Quality is a statement about some person(s)”(2):

Let’s not forget that the decisions of POs/SMs/architects/developers might appear rational but are in fact emotional and political. For example:

– a PO might be too afraid to impact the release;

– the work of an architect might be put aside or delayed, or he/she would have to argue with someone, which is not in his/her interest;

– a developer may not want stress; he/she has other priorities/things to do.

All of the above examples might be understandable or not; it depends on the context. Still, “quality is a statement about some person(s)”.

  3. Technical Debt:

I think this is such a misunderstood concept. The term is used often and hastily connected to tools. But tools only assist, and they should be used in a sapient way(3). There is much more to technical debt, and it’s sad to see it trivialized and bastardized. So when I think of technical debt I think of prudent deliberate technical debt, reckless deliberate technical debt, reckless inadvertent technical debt, prudent inadvertent technical debt, and the history of this concept.(4)(5)(6)

  4. Wrong metaphor regarding IT and code:

We think that when we apply measurements to code, it’s the same as in engineering. Well, it’s not quite like that. When civil engineers do their measurements and planning, there is a clear picture of what will be constructed, with no false positives(7). It is a complicated domain, meaning there is a clear path between causes and effects. In software it’s not like that: what we have are only fuzzy things. OK, we’ll fix all the warnings and errors spotted by the code analysis tools (e.g. FxCop/Code Analysis, NDepend, Puma, Sonar, …). Then what? Will the software be OK? No. We do not have the same certainty as in engineering. There are lots of false positives, and if the approach is not careful, the holistic view might be lost.
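To make this concrete, here is a small hypothetical sketch (the function and the warning are invented for illustration; they are not taken from any particular tool’s output):

```python
def average_response_time(samples):
    """Average of a list of timing samples, in milliseconds."""
    unit = "ms"  # the kind of finding a tool flags: variable assigned but never used
    return sum(samples) / len(samples)
```

Deleting the unused variable makes the dashboard number go down. But the risks the numbers cannot show remain: the function crashes on an empty list, and an arithmetic mean may be the wrong statistic for skewed response times. Fixing warnings is not the same as keeping the holistic view.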

So what’s the metaphor? Maybe it’s biology, ecology, or medicine.

Note: Tools are ok. They help and extend human capabilities, but they are not a replacement for human judgment.

How can I show that the technical aspects of the project are OK? When I hear this question, at first glance, I do not have technicalities in my head. I try to see the holistic view: the social part (by the way, architecture has a social component), release time, impact on testing, lost time, features delivered, responding fast enough to changes, being capable of delivering, fast enough, innovative things that users don’t even know how to articulate or need…

So: Technical Debt or Business Feature/Roadmap? When this question is raised and a decision has to be made, for me it is a heuristic that things were not considered, not even at the last responsible moment. Things don’t happen in isolation, and this means it’s a good moment to think about risks.(8)(9)

(1) Jim Highsmith, “Features or Quality? Selling Software Excellence to Business Partners”: http://jimhighsmith.com/features-or-quality-selling-software-excellence-to-business-partners/

(2) Jerry Weinberg, “Agile and the definition of quality”: http://secretsofconsulting.blogspot.ro/2012/09/agile-and-definition-of-quality.html?m=1

(3) James Bach, “Sapient Processes”: http://www.satisfice.com/blog/archives/99

(4) “Ward Explains Debt Metaphor”: http://wiki.c2.com/?WardExplainsDebtMetaphor

(5) Martin Fowler, “TechnicalDebtQuadrant”: https://martinfowler.com/bliki/TechnicalDebtQuadrant.html

(6) Michael “Doc” Norton, “Technical Debt”: https://cleancoders.com/videos/technical-debt

(7) Mark Seemann, “Humane Code”: https://cleancoders.com/episode/humane-code-real-episode-1

(8) Michael “Doc” Norton, a phrase regarding technical debt: https://www.linkedin.com/feed/update/urn:li:activity:6338750195731353600

(9) Cory Flanigan post on LinkedIn and comment from Michael “Doc” Norton: https://www.linkedin.com/feed/update/urn:li:activity:6339019436988727296

Code review discussion

Ri: Do you do code reviews?(1)

Shu: Yes. Every time we make a commit, we have to ask for two reviews. And the reviewers must put a comment.

Ri: You are describing the formal process, which should not be confused with the quality of the actual review. Why are you doing it?

Shu: Aaaa, because we must do it. It is mandatory. No commits should be made without code reviews.

Ri: So, for example, you use it for knowledge sharing?

Shu: Yes, yes, exactly.

Ri: Do you trust the reviews?

Shu: Of course; I can show you that no commit was done without them.

Ri: That’s not what I asked you.

Shu: Yes, I trust them.

Ri: What level of experience do your team members have?

Shu: Hmm… ughhh… confirmed (2–3 years of experience).

Ri: And they do the review between themselves?

Shu: Yes, of course.

Ri: Hmm, how can you trust those reviews and the technical quality of that code when, actually, the persons who made them don’t have that much experience in writing code?

(1) Alistair Cockburn, “Shu Ha Ri”: http://alistair.cockburn.us/Shu+Ha+Ri

Inattentional blindness and code reviews

I already spoke about inattentional blindness in the context of testing in a previous post: Inattentional blindness and test scripts. But there are implications for code reviews as well.

So, “We asked 24 radiologists to perform a familiar lung-nodule detection task. A gorilla, 48 times the size of the average nodule, was inserted in the last case that was presented. Eighty-three percent of the radiologists did not see the gorilla. Eye tracking revealed that the majority of those who missed the gorilla looked directly at its location.”(1)

But how can the effects of inattentional blindness be reduced?

”A more effective strategy would be for a second radiologist, unfamiliar with the case and the tentative diagnosis, to examine the images and to look for secondary problems that might not have been noticed the first time through.”(2)

So it does make sense to do a code review, because things might have gone unnoticed. But with complex systems the important thing is granularity. This means that we have to understand who does the review and see the possible implications.

For example, let’s say a novice has written some code and another novice will review it. The junior reviewer will possibly understand where and why that code was written like this. But is it enough? I would say no. A senior/advanced developer has to review the code because he/she:

– can share experience;

– has deep technical knowledge (OOP, architecture, clean code, properly dealing with legacy code, TDD, unit tests, algorithms);

– is capable of bringing more perspectives.

Is this still enough? I would say no. A tester can also take a look at the code, with the intent of helping his/her testing.

There are some lessons here:

– a novice can participate in a review because it is a good occasion for him/her to learn. It is also a good occasion to become aware of the context and give feedback;

– it is not OK when the code review is not done by a senior, because slowly, slowly, things will get into such bad shape that they will be difficult to repair;

– a senior developer might miss certain details;

– a manager should not feel comfortable just because the process does not allow code to be committed to source control without reviews, even if that process says something like: “a review must be made by at least one senior developer”;

– code review is related to testing in the sense that the tester is searching for information which will be used in designing the tests;(3)

– this should not be used by developers as an excuse for a lack of professionalism.

This is praxis(4)….

(1) Trafton Drew, Melissa L.-H. Võ, and Jeremy M. Wolfe, Psychological Science, September 2013, 24(9):1848–1853, “The Invisible Gorilla Strikes Again: Sustained Inattentional Blindness in Expert Observers”: http://search.bwh.harvard.edu/new/pubs/DrewVoWolfe13.pdf

(2) Daniel Simons and Christopher Chabris, “The Invisible Gorilla”: https://www.amazon.com/gp/aw/d/B009QTTYCQ

(3) James Bach, Rapid Software Testing class discussion [2017]

(4) Dave Snowden, “Of practical wisdom”: http://cognitive-edge.com/blog/of-practical-wisdom/

Inattentional blindness and test scripts

“We asked 24 radiologists to perform a familiar lung-nodule detection task. A gorilla, 48 times the size of the average nodule, was inserted in the last case that was presented. Eighty-three percent of the radiologists did not see the gorilla. Eye tracking revealed that the majority of those who missed the gorilla looked directly at its location.”(1)

We think we pay attention to things, but it’s not like that. We’re aware of only a small part of what we see at any moment. So we might even look at something and not be able to see it: the eyes look directly at something without actually seeing it.

Initially, when I thought about test scripts in conjunction with the concept of inattentional blindness, I imagined it was a way to disprove test scripts entirely. I say this for the following reasons:

– if a test script is followed with great precision, then for sure other things are overlooked; therefore the script is not a very useful technique;

– but if the tester says that he/she will also check other things that are not included in the test script, that behaviour in a way invalidates the test script.

In other words, these two observations practically invalidate the idea of a test script.

I admit I was biased against test scripts because I saw them (and I still see them) applied in a very, very wrong way.

James Bach surprised me by saying that, actually, test scripts are an advanced test technique that should not be done by novices(2).

An advanced tester is aware of his/her limitations and makes sure that the testing is not affected in a bad way when using test scripts.

The reason a junior tester should not receive test scripts is that, in the likely scenario, the test scripts will not be run as they should be(2). A junior/novice tester should be guided/accompanied by a senior tester.

This is praxis(3)….

(1) Trafton Drew, Melissa L.-H. Võ, and Jeremy M. Wolfe, Psychological Science, September 2013, 24(9):1848–1853, “The Invisible Gorilla Strikes Again: Sustained Inattentional Blindness in Expert Observers”: http://search.bwh.harvard.edu/new/pubs/DrewVoWolfe13.pdf

(2) James Bach, Rapid Software Testing class discussion [2017]

(3) Dave Snowden, “Of practical wisdom”: http://cognitive-edge.com/blog/of-practical-wisdom/

It is not OK to define a tester as a manual tester

Let’s try an experiment(1):

Scrum Master = Manual Scrum Master

Programmer = Manual Programmer

Project Manager = Manual Project Manager

Product Owner = Manual Product Owner

Team Lead = Manual Team Lead

Technical Lead = Manual Technical Lead

Delivery Manager = Manual Delivery Manager

Security = Manual Security

Marketing = Manual Marketing

Coach = Manual Coach

Trainer = Manual Trainer

Mentor = Manual Mentor

CTO = Manual CTO

CEO = Manual CEO

It’s not OK, is it?

Maybe this is happening because it is not known what a tester should really do and what testing, professional testing, actually is.

But it is also sad that, with all the agility, we have left professional testing outside, and this is not OK.

I am glad that Robert C. Martin has spoken about this. In a way he began to heal the divide between agile and professional testing (Context Driven Testing, Rapid Software Testing). Below is reproduced an entire blog post written by Robert C. Martin:

“James Bach gave a stirring keynote today at ACCU 2010. He described a vision of testing that our industry sorely needs. Towit: Testing requires sapience.

Testing, according to Bach, is not about assuring conformance to requirements; rather it is about understanding the requirements. Even that’s not quite right. It is not sufficient to simply understand and verify the requirements. A good tester uses the behavior of the system and the descriptions in the requirements, (and face-to-face interaction with the authors of both) to understand the motivation behind the system. Ultimately it is the tester’s job to divine the system that the customer imagined; and then to illuminate those parts of the system that are not consistent with that imagination.

It seems to me that James is attempting to define “professionalism” as it applies to testing. A professional tester does not blindly follow a test plan. A professional tester does not simply write test plans that reflect the stated requirements. Rather a professional tester takes responsibility for interpreting the requirements with intelligence. He tests, not only the system, but also (and more importantly) the assumptions of the programmers, and specifiers.

I like this view. I like it a lot. I like the fact that testers are seeking professionalism in the same way that developer are. I like the fact that testing is becoming a craft, and that people like James are passionate about that craft. There may yet be hope for our industry!

There has been a long standing frission between James’ view of testing and the Agile emphasis on TDD and automated tests. Agilists have been very focussed on creating suites of automated tests, and exposing the insanity (and inanity) of huge manual testing suites. This focus can be (and has been) misinterpreted as an anti-tester bias.

It seems to me that professional testers are completely compatible with agile development. No, that’s wrong. I think professional testers are utterly essential to agile development. I don’t want testers who rotely execute brain-dead manual test plans. I want testers using their brains! I want testers to be partners in the effort to create world-class, high-quality software. As a professional developer I want – I need – professional testers helping me find my blind spots, illuminating the naivete of my assumptions, and partnering with me to satisfy the needs of our customers.”(2)

(1) Michael Bolton, “Manual and automated testing”: http://www.developsense.com/blog/2013/02/manual-and-automated-testing/

(2) Robert C. Martin, “Sapient Testing: The “Professionalism” meme”: https://sites.google.com/site/unclebobconsultingllc/home/articles/sapient-testing-the-professionalism-meme

A fallacy when estimating in Sprint Planning

Usually, in sprint planning, there comes a moment when developers and testers have to give an estimation. After the tasks seem to be clear, the testers and developers give their corresponding story points. What seems strange to me is that these estimations are combined and a single number is produced.

For me, this approach is like comparing apples with pears and then generating a number.

I imagine a lot of agilists will say that this works. I would say: maybe. What gave them the impression that it works? Well, I have seen that human nature is capable of coping with formal nonsense and then, informally, making it work one way or another. Why? Well, because sometimes a lot of energy is lost in convincing some people of some strange truths, truths that are not in trend but that might seem, for the moment, easier to cope with. Why? Well, because of laziness, fear, workload, or just because it’s safer…

For me, the worlds of developers and testers are very different. Yes, these worlds can intersect somehow, but they are still very different, like two oceans meeting each other.

For a developer, when he/she knows what to solve and how to solve it, the task is easy and the code is written. That code should behave in a deterministic manner. The developer might write unit tests and/or integration tests; from a tester’s perspective these are automated checks, not tests(1). And when I say developer, I am thinking of an XP developer, of a developer who aims at what Uncle Bob describes(2).
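As a minimal illustration of that distinction (the function and values are hypothetical), an automated check confirms one predetermined, deterministic outcome; it does not explore, question, or learn anything new:

```python
def discount(price, percent):
    """Apply a percentage discount and round to cents."""
    return round(price * (100 - percent) / 100, 2)

# An automated check: the same input must always produce the same output.
assert discount(200.0, 10) == 180.0
```

The check passes identically on every run; whether 180.0 is the right answer for the business is a question a check cannot ask, only a human tester can.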

For a tester, things are different. Although things might appear to be clear for the same story, we might deal with non-determinism. When I say tester, I am thinking of a tester who is part (consciously or unconsciously) of the Context Driven Testing(3) school, for example a Rapid Software Testing tester(4). Testing requires a very different skillset, because the problems a tester encounters have a different nature compared to the ones a developer faces. So even if a developer has modified just one line of code, a tester can spend days and days if he/she decides to do deep testing, and not just shallow testing. Why? Because the code is usually surrounded by a larger code context that might affect the stability of the product. But if it is just one line of code, why would it take a lot of time? Because a small cause can have big effects. For example, does the developer know with great precision all the internal technical details of the code? A simple data type problem can cause some of the strangest and ugliest effects, and it all starts from just one line of code.
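A tiny hypothetical example of such a one-line data type problem: using binary floating point for money looks harmless in a diff, yet it quietly breaks comparisons downstream:

```python
from decimal import Decimal

# The one suspicious line: binary floats cannot represent 0.1 or 0.2 exactly.
subtotal = 0.1 + 0.2
print(subtotal == 0.3)             # False: subtotal is 0.30000000000000004

# The one-line fix: an exact decimal type intended for money.
subtotal = Decimal("0.1") + Decimal("0.2")
print(subtotal == Decimal("0.3"))  # True
```

One changed line, and every total, comparison, and report that touches it is affected; that is why the time needed for deep testing is not proportional to the size of the diff.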

Therefore, I think we should be careful about what we do, and we shouldn’t be reluctant to raise questions, even if we might be wrong. I consider this approach of combining numbers that mean different things a fallacy. For me, it is an oracle(5) for spotting the problem, and, why not, I have a name for it: the Two Oceans integration.

(1) James Bach, “Testing and Checking Refined”: http://www.satisfice.com/blog/archives/856

(2) Robert C. Martin, “The Programmer’s Oath”: http://blog.cleancoder.com/uncle-bob/2015/11/18/TheProgrammersOath.html

(3) Context Driven Testing: http://context-driven-testing.com

(4) Michael Bolton, “What Is A Tester?”: http://www.developsense.com/blog/2015/06/what-is-a-tester/

(5) Michael Bolton, “Oracles from the Inside Out, Part 1: Introduction”: http://www.developsense.com/blog/2015/09/oracles-from-the-inside-out/

A programmer learning testing

I began to learn about testing because I was blessed to work with some really good testers. Because of that, I wanted to learn more, in order to make my interaction with them more fruitful. If I am a better developer, it is also because of them. I am glad that I met them first, because afterwards life also showed me the unprofessional testers. This made it possible for me to compare, analyze, see…

Learning about testing helped me:

  • see through the eyes of the tester;
  • be aware of two worlds that interact with each other, but that shouldn’t be confused;
  • have a better understanding of the professional behaviour that a programmer should have. A programmer should test his/her work to the best of his/her abilities; this way, the tester will have more time to concentrate on the more delicate aspects;
  • respect the tester and his/her job. Sometimes I feel the tester is used to filter the garbage thrown in by a developer doing a lousy job;
  • learn about critical thinking;
  • learn about tacit and explicit knowledge;
  • learn how to better approach the process of creating/building automated checks (unit tests, integration tests);
  • enter the world of builds and deploys, and work a lot with them;
  • be careful with semantics;
  • learn about heuristics, oracles;
  • be aware of the work made by James Bach, Michael Bolton, Jerry Weinberg.