Round Earth Test Strategy and earthquakes

Recently, James Bach published a nice analogy/model regarding test strategy called “Round Earth Test Strategy”(1).

In January 2018, James Bach was kind enough to share this analogy on a special RST channel (created especially for RST alumni; it’s an active and very useful group to be in – another good reason to take the RST class(2)).

I liked it. But in that moment, and in the days and months after, I felt I was still missing the aha moments which confirm that a subject has somehow been internalized.

Museum

The image for this blog post shows the Natural History Museum in London. It was there that I had the aha moments I knew I was missing. Suddenly, James Bach’s Round Earth model came back to me. The museum has a section named “Volcanoes and Earthquakes”, and as I began to read the texts displayed there, I was amazed at how the things described can serve, as James intended, as an analogy for testing, but maybe also for problems/risks/bugs/tools/approaches from the IT field:

● “Earthquakes can happen without warning, causing death and destruction on a massive scale. When they strike we feel a sudden, violent shaking of the ground, but they are caused by slowly moving plates on Earth’s surface. As these plates move, pressure builds up until it finally gives way”(3) → Reading this text, I thought of bugs. How certain bugs can create a lot of mess (urgent calls, staying late into the night, missing important moments). How those bugs appear suddenly, without notice. But what struck me was the combination of slowness and violence in earthquakes.

Slowness in:

– how functionality is written → functionality is not developed instantly. There are meetings, emails, writing code, searching through code, testing, etc.

– the execution of the program → a memory leak in managed code usually takes time to become observable, but when it reaches a critical point the entire system crashes, not just the program (see the sketch below).
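To make that slow build-up concrete, here is a minimal sketch of a managed-memory leak (TypeScript, with hypothetical names; a toy illustration, not a real service):

```ts
// A cache that only grows: nothing fails at first, but pressure
// accumulates until the runtime exhausts its memory and the whole
// process (and whatever shares the host) is affected.
const responseCache = new Map<string, string>();

function handleRequest(requestId: string): string {
  const payload = `response for ${requestId}`.repeat(10_000);
  responseCache.set(requestId, payload); // entries are added...
  return payload;                        // ...but never evicted
}

// Simulated traffic: each call moves the "plates" a little further.
let i = 0;
setInterval(() => handleRequest(`req-${i++}`), 1);
```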

The idea of moving plates made me think of integration testing.

● “Preparing for the worst; living with earthquakes: Scientists can’t predict exactly where and when an earthquake will strike but they know roughly which areas around the world are at risk. It is vital for the people living in these areas to prepare for what may come and know what to do when it does. Without adequate preparation, earthquakes can cause huge suffering and destruction”(3) → This describes perfectly the role of testers: why they search for the worst, why they should think negatively. Testers, like these scientists, should understand that they cannot predict, and cannot be sure that the software is ok (a single black swan among thousands and thousands of white swans was enough to prove that there were not just white swans). Although they can’t predict or prove the correctness of a software program, they can use models to identify possible problem areas, guided by risk (for example, by looking at source control metadata, weak areas within the source code can be discovered). Since we’re talking about people, the risks also have a psychological and sociological dimension. It is certain that problems will occur, and maybe we should also guide our testing by the possible suffering we may create for the ones using our software.

● “Impact scale: There are different ways of measuring earthquakes. Unlike the Richter scale, which measures the magnitude of the shaking, the Mercalli scale measures the amount of damage caused – the loss of life and the damage to buildings. Generally speaking, the higher the earthquake magnitude, the greater the devastation, especially when it strikes near populated areas. But you also have to factor in the depth of the earthquake, and how well people have prepared. A big earthquake can have a low Mercalli value if it happens deep underground or if buildings have been properly supported”(3) → When I saw “measuring”, I recalled the nonsense of counting test cases – a practice very susceptible to the reification error, which makes it very dangerous. But we do have something that tries to avoid the reification error and is based on events/activities: it is called Session Based Test Management.

But there is more: the fact that a big earthquake can have a low Mercalli value made me think about complexity and the fact that the relation between cause and effect is not linear. Populated areas also indicate complexity, because social systems are inherently complex. For testing, this means the approach is more informal (probing, then making sense of what happens, then responding/reporting the possible problems), not a formal one. Thinking of the checking dimension of testing specifically, maybe mutation checking makes sense here (see the sketch below).
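To illustrate what mutation checking means, here is a minimal sketch (TypeScript, hypothetical function names; real mutation-testing tools automate this): a deliberately mutated copy of a function is run against the same checks, and a check that passes for both the original and the mutant is too weak.

```ts
// Original function under test.
function isEligible(age: number): boolean {
  return age >= 65;
}

// A "mutant": a single operator changed (>= became >).
function isEligibleMutant(age: number): boolean {
  return age > 65;
}

// A weak check passes for both versions, so it cannot detect the mutant:
console.assert(isEligible(70));       // passes
console.assert(isEligibleMutant(70)); // also passes -- the mutant "survives"

// A check at the boundary distinguishes them: it passes for the original
// and logs an assertion error for the mutant -- the mutant is "killed".
console.assert(isEligible(65));       // passes
console.assert(isEligibleMutant(65)); // fails: the mutation is detected
```

If our checks let mutants survive, the checking dimension of our testing is weaker than the pass/fail counts suggest.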

There is also another implicit dimension here: the place where the earthquake happens. Even a small earthquake, if it happens in the ocean, can generate a tsunami. Relating this to testing makes me think of the different coverage areas like structure, platform, function, operations, data, time, interfaces(9), and hazard (thanks, Ionut Albescu).

● “Danger after the quake: The danger doesn’t stop once the ground has stopped shaking. Fires, landslides and even liquefaction can all cause damage and loss of life…Scientists and engineers have developed ways to deal with these dangers through defenses, warning systems and building design. But even with the best plans in place some communities can still be caught off guard”(3) → How many developers, testers, scrum masters,… think that maybe a person will be fired because he/she is not working fast enough with our software product? How will a developer sleep at night when his/her code has caused, even indirectly, a death or a bankruptcy? There are consequences, but a lot of people don’t accept that they must also assume the unintended consequences triggered by what their product does. How will they deal with that?

They have developed defences, but it’s interesting that they speak about them in terms like tools(4) and models(5). We, in IT, overuse the word “automation”…

When they speak about plans it’s very serious, because it’s about people’s lives. They are not using some tools/techniques as a plan; they are guided by the reality of the situation. Compare that with what a test plan means for a lot of people: automation at the unit level, integration, and maybe acceptance (BDD). Then they add “exploratory”, although they are not able to articulate what it means or how to do it in a professional way → that is not a test plan.

What if aftershocks are the equivalent of hotfixes? Then we have a flow like this:

1. A bug appears

2. A quick hotfix is made, in a hurry

3. As a result, that hotfix can cause undesired problems (maybe inadvertently), because of the chaos created by the initial bug.

The last sentence made me think of the Japanese word “hansei”. Even though those scientists have built, and are building, defence and warning mechanisms (kaizen), things can still go wrong, and that brings sadness/regret.(6)

● “After the earthquake, responding to disaster: earthquakes and tsunamis can destroy homes and buildings, transforming lives. The hours and days that follow the disastrous events can be vital for saving anyone who has been trapped…. As people come to terms with the destruction they can start the process of building resilience – changing the way they live and act to deal with the risk of an earthquake in the future. This can leave them better prepared for future earthquakes”(3) → The keyword here is resilience, but a lot of IT people want robustness. We, in IT, have chosen the wrong metaphor. Rapid Software Testing (a Context-Driven methodology) is fully aware of this; that’s why it’s so different from “Factory Style” testing(7): it sees the context as an ecology, not as a factory(8).

Conclusion: Read James Bach’s post, then read the text from the museum again. I hope you will find it as useful as I did.


(1) James Bach, “Round Earth Test Strategy”, http://www.satisfice.com/blog/archives/4947

(2) https://rapid-software-testing.com/

(3) London Natural History Museum, Earth Galleries; text used with permission under the Non-Commercial Government Licence, Copyright © The Trustees of the Natural History Museum, London

(4) “Tectonic hazards/Earthquake engineering”, https://en.m.wikiversity.org/wiki/Tectonic_hazards/Earthquake_engineering

(5) “Improving defence against earthquakes and tsunamis”, https://www.ucl.ac.uk/news/2017/mar/improving-defence-against-earthquakes-and-tsunamis

(6) James Coplien, interview on www.infoq.com – in this interview he explained Scrum and these two Japanese words, among other things. I can’t find the link, but I do have the text in my personal archive, though.

(7) James Bach, Michael Bolton, “RST Appendices”, http://www.satisfice.com/rst-appendices.pdf – pages 3-6

(8) Alicia Juarrero, “Safe-Fail, NOT Fail-Safe”, https://vimeo.com/95646156

(9) James Bach, Michael Bolton, SFDIPOT, http://www.satisfice.com/rst.pdf

About best practice

This idea of best practice continues to pop up. I find it disturbing, in the sense that some practices are being adopted just because the word “best” appears next to them. And this is bloody dangerous. In the lines below, I try to unpack what I have just said, because I realized that even though we are speaking about two words, it is important to say what those two words might mean and lay out the thoughts around them.

As a premise, I have to say that language is very important because it can trigger different kinds of thoughts (I used to doubt this, but after reading some of Antonio Damasio’s work, I am now sure of it). For example, I told a Scrum Master that her role is like an attractor (in the sense of the Lorenz attractor). I could have used a word Scrum Masters are accustomed to, but I might have failed to communicate what I wanted to say.

So, for me there are two directions from which to think about this topic, at least to try to clarify it or see if there is some coherence: first, what some people might really mean by “best practice”, and second, what “best practice” really is.

So:

1. What some might really mean by “best practice” → to see what they might mean, I imagine a lot has to be inferred.

Here I also identified two directions; there might be others:

1.1 Some people want to say something else, but don’t have the words.

For example: we want to use a new JS framework, like Vue/React/Angular. No one knows the particulars. So someone might ask: “What are the best practices for developing with Vue/React/Angular?” By saying this, that person is, in a way, trying to deal with the uncertainty around that framework. The official documentation and video trainings certainly give a way to handle working with Vue/React/Angular. This means that by “best practice” that person might mean some common norms/recipes for tackling that UI framework. If they follow the patterns exposed in the documentation of others, they will be able to learn/do/search through the work – at least that might be their thinking.

With this example I noticed some things:

– It is one thing to learn a specific syntax; for that, those introductory videos/documentation might help;

– It is a totally different story to structure/architect the code without being biased by the UI framework. I have to say that I do not search for a Vue/React/Angular developer; I search for a craftsperson developer, which is a totally different thing(1) – by the way, this is becoming a serious problem, especially in outsourcing;

– Framework designers will not ask for my permission, or yours, to modify or retire a framework;

So, in this case, “best practice” might mean something like finding a way to deal with the unknown. But in solving this unknown thing (or at least having the impression it can be solved), a person might end up treating the so-called best practices as universal rules for that specific framework – and this is probably how many people hold them in their thoughts. And here, at least for programmatic stuff, trouble might begin.

Small conclusion of this point: “best practice”, maybe, is used in two ways – first to tackle/understand the unknown in a tacit way, and then as universal recipes with no context awareness. And maybe the person using them is not aware of this.

1.2 Mechanistic thinking: truly believing/hoping that there exists a “best practice” applicable everywhere.

For example, an upper manager who has to handle multiple projects, new and existing, in an outsourcing regime.

This kind of person might look for best practices in architecture, again as recipes. He/she will want this because once he/she has the list of the so-called architectures, the architecture subject will no longer be a problem. He/she can then concentrate on hiring a React/Vue/.Net/… developer, not a craftsperson developer.

With testing, he/she will ask for the infamous “let’s automate everything”, and for a common automation approach across all projects, like using Gherkin and that’s it.

This kind of person thinks that everything is the same, forgetting that “God is in the details”(2).

This kind of person will come with a checklist to be sure that some practices – pardon, best practices – which they already have in their head, or which were imposed by others, will be implemented. What is sad is that those practices are the same for each project; context is not considered at all. I have noticed that some use “good practice” instead of “best practice”, but their dynamic of action is the same; they have only swapped one term for the other.

Small conclusion of this point: here “best practice” is much more dangerous than the one described in section 1.1, because it imposes a certain way of doing things no matter what. The dynamic of actions generated by this can and, I think, will generate a lot of strange and unwanted things.

2. What “best practice” really is

Here I’m influenced by the book “Tools of Critical Thinking: Metathoughts for Psychology”, in the sense that there is a chapter which gives examples of a concept being true and wrong at the same moment – I hope I recall it correctly.

Also I have to say that I’m influenced here by the work of Alicia Juarrero and Dave Snowden.

I like the idea of seeing context from the point of view of constraints and causality. In contexts where best practice might be applicable, the constraints are governing ones; they are very rigid. And causality should be identifiable very clearly and without doubt: event B happened because of event A and nothing else.(3)

So, the context must be very, very rigid. But, generally speaking, when we deal with human systems, a system constrained in such a way will crash or find hidden ways to do the work.

Small conclusion of this point: even if “best practice” might mean something and make sense, it surely has a bounded applicability, and this is not understood by a lot of people.

Conclusion: when I hear the words “best practice”, for me it is like a heuristic for sloppiness/anomaly/something that raises my guard – and if it is imposed, it might hurt a lot. It is surely a good signal to look at the situated present and how to analyze/see/make sense of it.


(1) David Schmitz, “10 Tips for failing badly at Microservices”, https://www.youtube.com/watch?v=X0tjziAQfNQ

(2) James Coplien, “Lean Architecture: for Agile Software Development”, https://www.amazon.com/Lean-Architecture-Agile-Software-Development/dp/0470684208

(3) Dave Snowden and Mary E. Boone, “A Leader’s Framework for Decision Making“, https://hbr.org/2007/11/a-leaders-framework-for-decision-making

Testers, testing, automation, tool(s), continuous integration

Automation, for a (good) tester, should not be guided by Continuous Integration (CI). I expect that tester to do the job regardless of whether he/she uses tools – to use whatever ingenious tool (or tool-supported approach) he/she considers fit, and not to be limited by CI and Continuous Delivery (CD). When that tester thinks about automation (actually “tools” is the right word), he/she should not have in mind only Gherkin/Ranorex/UI automation… In fact, when the tester notices that these kinds of tools are being promoted/enforced, he/she should be skeptical. Yes, if the tester can make use of them where applicable, then why not, but he/she should not be caught in a trap with these tools and see nothing beyond them.

Regarding CI and “automated testing” (checking is the right word, not testing – semantics matter(1)), just to be clear: they are useful, but they are not “the path, the way and the truth”.

Let’s do a small exercise: can you imagine a situation in your product/project where 12 things are related? By 12 I mean persons, requirements, things in source code. Regarding a person, we know from anthropology that he/she has multiple identities and shifts between them(2). If we have 12 dots, the number of possible links is 12(12−1)/2 = 66. But the number of possible patterns is 2^66, about 7.4 × 10^19(3). At this level we no longer speak about deduction or induction, but abduction. Now think again of CI with its automated checks and its counts of test cases written and executed… something is not ok, right? Somehow I begin to see the downside of blindly believing in CI/CD.
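As a back-of-the-envelope check of those numbers (a minimal sketch; the twelve “dots” are arbitrary):

```ts
// Pairwise links among n related "dots": each unordered pair is one link.
function links(n: number): number {
  return (n * (n - 1)) / 2;
}

const n = 12;
const l = links(n);               // 66 possible links
const patterns = 2n ** BigInt(l); // each link is either present or absent

console.log(l);        // 66
console.log(patterns); // 73786976294838206464n, i.e. ~7.4e19 patterns
```

No realistic set of predefined checks enumerates a space of that size, which is why the paragraph above reaches for abduction rather than exhaustive checking.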


(1) James Bach, Michael Bolton, “Testing and Checking Refined”, https://www.satisfice.com/blog/archives/856

(2) Dave Snowden, “The landscape of management: Creating the context for understanding social complexity”, https://www.researchgate.net/publication/228449006_The_landscape_of_management_Creating_the_context_for_understanding_social_complexity

(3)”The problem of connecting the dots”, https://sensing-ontologies.com/the-problem-of-connecting-the-dots/

About testers owning the requirements or the product

In a way, this blog post is related to the previous two (see here and here) regarding the problem of testers being underrated.

It’s related because, for some, a solution to the problem of underrated testers is to encourage the idea that testers should own the requirements. But there’s more to it: it’s not enough for them to encourage the owning of the requirements/product; the following two ideas are also being enforced:

– presenting the idea of “automation testing” as the holy grail of testing, and presenting this “automation testing” as a byproduct of Continuous Integration or Continuous Delivery (no, I am not against CI or CD, just to be clear);

– making testers the right hand of the Product Owner (PO), or an extension of the PO’s role.

So is this ok? I think not; I think there are flaws in how things are being associated – flaws which show a lack of understanding of what testing is.

Testers should not own the requirements. If they own the requirements, they have to be the BA or PO or… Testers must find relevant information about the requirements – information that, if not known at the proper time, will be problematic for the product/project/delivery/team. If a tester wants to own the requirements, then he/she is no longer a tester but a BA/PO/… The tester has a special mindset, different from all the other team members: he/she must always be thinking about where negative/unwanted/bad things can happen. An owner of requirements will try to protect them, when actually the tester has to dissect and analyze them from all angles, to challenge their status quo.

Just think about the basics of security testing and the mindset needed to do it. The tester must use all the models/techniques available to knock down/find bad stuff, not to own a thing which will never be complete. Why never complete? Because “we always know more than we can say, and we will always say more than we can write down”(1). And you want to limit the tester to owning the requirements? I hope not.

A tester has the same importance as a BA, PO, programmer, PM, team lead, UX, etc. The tester must be the bridge between useful information and all the members/colleagues/clients who need that information.

Testers do not drive the delivery; they are, as James Bach says, “the light of the car in the night” – this is the correct metaphor – and those lights do not own the car…

I think (I could be wrong) that a lot of testers want to be the gatekeeper because of a bad self-image, but also because of the bad image testing/testers have nowadays. I understand where that bad image comes from, when their work is reduced to test cases, counts of test cases, and automation of test cases.


(1) Dave Snowden, “Rendering Knowledge”, http://cognitive-edge.com/blog/rendering-knowledge/

About the idea that testers are underrated – Part 1

In a previous blog post I spoke about being incapable of defending testing and testers. What struck me when I wrote that blog post was a discussion I had with a prospect. This prospect clearly expressed that testers, had they been on the project, should have been paid much less than developers with the same experience. For example, a senior tester should not be paid the same as a senior developer; the rate should actually be half of that paid for a senior developer.

What was even harder for me was that my colleagues somehow agreed. In those moments, shortly after the discussion with the prospect moved on to other subjects, images of all the testers from the firm flashed in front of my eyes. Running those images through my mind – there were a few dozen testers – I realized that they also do not see testing as I see it. For me, testing is Context-Driven Testing (via Rapid Software Testing(1) and Black Box Software Testing(2)); for them, it is more like Factory Style testing(3).

It’s time to speak/act about testing the proper way (and by testing I mean the testing envisioned by Jerry Weinberg, Cem Kaner, James Bach, Michael Bolton… – that line of thinking). It’s becoming really embarrassing how such an important discipline is being trivialized in such an ugly/unprofessional/shameful/disgraceful manner in our industry.

The main issue here is not “underselling”/“underrating”(4) – these things are effects. It’s about under-appreciation of the craft of testing. Seriously, what expectation of being appreciated should a tester have when he/she self-identifies as a manual/automation tester or QA automation tester and so on? I have seen interviews in which testers are asked how they write a test case, whether they know Jira/Redmine/…, and whether they know waterfall/agile/… methodologies. Seriously?! And we wonder why testing is badly seen? I have seen testers who encourage these kinds of interviews (despite the fact that they label themselves as testers, for me they are not). The sad part is that managers believe this nonsense, pay for it, and will be fooled again into thinking that “test automation” can solve/replace everything. Actually, if a tester is defined by writing test cases, knowing Jira/Redmine and knowing some agile/waterfall, no wonder everyone can say that their work can be automated or underrated.

Are testers to blame for this? For sure they should do more to defend their craft. It’s time for testers to know what professionalism in testing really is(5).

Also, it’s time for the Agile community to understand that testing, the real one, is very different: http://www.developsense.com/presentations/2017-09-TestingIsTestingAgileIsContext.pdf (careful, it’s a long document – not just a list of platitudes; it’s serious stuff, no fooling around). After reading this document and thinking about the kind of tester and testing described there, the words “underrated” and “underselling” will surely not pop up in anyone’s head anymore in regard to testing and testers. They will probably say something like: “wow, what I saw until now regarding testing was a bad comedy” or “I lost money in a stupid way”.

I am a developer trying to make sense of what real testing is. I was glad to see public feedback on this topic from a Context-Driven tester, Klára Jánová. Part 2 of this post contains her feedback on the subject discussed here.


(1) James Bach, Michael Bolton, “Rapid Software Testing”, https://rapid-software-testing.com/

(2) Cem Kaner, “Black Box Software Testing course”, http://www.testingeducation.org/BBST/

(3) James Bach, Michael Bolton, “RST Appendices”, http://www.satisfice.com/rst-appendices.pdf (pages 3 to 7)

(4) Claire Goss, “Testers – Is it our own fault we are Underrated?!”, http://www.exactest.ie/blog-testers-underrated.html

(5) Robert C. Martin, “Sapient Testing: The “Professionalism” meme”, https://sites.google.com/site/unclebobconsultingllc/home/articles/sapient-testing-the-professionalism-meme

Stories, Story points and Velocity

This topic keeps appearing in my current contexts, and also on LinkedIn. But why?

Below I’ll try to mention some possible reasons why this is happening. Although I’ll mention them separately, maybe they are not fully separate – there might be connections between them. In a way, I would like to emphasize that in a certain moment and context one of them predominates. So:

  1. The desire to learn about these concepts (stories(1), story points(2), velocity(3)): it might happen that someone wants to learn about them out of her/his own discoveries/investigations/curiosity, trying to make sense of a reality which might be contradictory – though I doubt there are a lot of people asking without being influenced by the points I specify below.
  2. Dominant thinking: this is now the trend. Try to speak with someone and do planning without using the notions of story, story points and velocity. Look at FDD, for example – you can do it without those notions; in FDD this is articulated in a different way. For Scrum, actually, these (stories, story points, velocity) are, as Alistair Cockburn rightly said, barnacles(4) (look at the Scrum Guide, but also ScrumPLoP). Sometimes, it seems, it is dangerous not to go with the dominant thinking, where applicable.
  3. Trainings: here I do not speak about the trainings given by the best in the industry (by best I mean Uncle Bob, Jim Coplien, James Bach, Michael Bolton, Alistair Cockburn, Ron Jeffries, Ward Cunningham, Dave Snowden, etc. – I could mention more, but you get the idea). I want to emphasize those trainings given by people who only know some phrases and that’s it – people who are not capable of expressing and teaching their students the bounded applicability of those techniques; for them it is a best practice to follow and that’s it. So: bad trainings;
  4. Target(s): I think this is the most dangerous reason. It is about Goodhart’s law: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes”(5). I am amazed at how many anomalies are generated by this. So, some people want to learn more, but they do not understand, I think, that those things (stories, story points, velocity) are being used in a perverted/wrong way – or maybe they do understand, but they need ammunition to attack the targets.
  5. Tools (Jira, Redmine): there are tools which entice managers to ask for things because they can generate beautiful graphics with them. I would say that, in a way, the tool (or the bad manager who thinks only linearly) dictates how things get done, not people. I think these kinds of managers equate agility with these tools…, which is sad. Just think about aggressive/subtle micromanagement, or about handling a project by using estimations and nothing else, or about that desire for predictability – well, these tools offer “support” for all this and many other things. So: bad use of tools.

Conclusion: if I had to give a short, fast answer, I would probably have pointed to the possibility of an anomalous/perverted/not-ok context. I’m well aware that I may be influenced by certain biases I have.


(1) Ron Jeffries, “Essential XP: Card, Conversation, Confirmation”, https://ronjeffries.com/xprog/articles/expcardconversationconfirmation/

(2) Ron Jeffries, Google group conversation about why story points were invented, https://groups.google.com/forum/#!msg/scrumalliance/ag8W8xtKQs8/4cOpyt8Jgr0J

(3) Martin Fowler, “XpVelocity” , https://martinfowler.com/bliki/XpVelocity.html

(4) Alistair Cockburn,“Core scrum, barnacles, rumors and hearsay, improved version”,  https://www.youtube.com/watch?v=AuUadPoi35M

(5) Dave Snowden, “The Strathern variation”, http://cognitive-edge.com/blog/the-strathern-variation/

How to make people hate the idea of estimations

Once in a while, the topic of estimations pops up in my activity. I say once in a while because I am thinking of a special way of doing estimations – or, better said, of how the idea of estimation is staged in some contexts. I have had this topic in my mind for a long time, and it became obvious again these days.

So:

1. Usually, when you ask people whether certain items can be done in a sprint, they can give, perhaps after some calibration, a clear answer like: “Yes, we think it is ok to do these in this sprint” or “Yes, it makes sense to have those items in this sprint”. This means that an estimation is being made – implicitly, if not explicitly.

Notes:

– Let’s suppose that some unknowns were already clarified by some spikes/investigations done beforehand;

– “calibration” is an important word[1];

But – and maybe I am wrong – the same people, when asked for detail (hours/points within the sprint), will not feel comfortable with the estimation, or with the meeting(s) held for it. I think this happens partly because humans are messy[2], and that is ok.

What is not so ok, I think, is this: why ask for more specific estimation detail for a sprint when the team has already said those things can be done in the sprint – and, with the information they have so far, that is the maximum amount of work they can accept as a team?

There are some possible answers I can think of:

– maybe someone wants control within the sprint (perhaps the managers of the project’s manager impose it on him, or there is a lack of trust, or an outsourcing context where a client paying by the day involuntarily triggers this need, or he/she knows that the setup of the team is not ok – let’s say the competency of the team members);

– I have seen how tools like Jira entice some managers to ask for these things;

– they do not understand why sprints/iterations were created;

2. Then there are the estimations made when taking on a project that has to be finished in, let’s say, 6 months. These estimations “must be ok, they must not be wrong”. So we have an initial “must be ok” estimation. But then, after 4 sprints, a new estimation is done – and of course I can imagine people’s joy in doing it, because this estimation, strangely, must again be “ok”…

So that kind of project is, in a way, managed by the “must be ok” estimations, if I can say so. Just to make myself clear: I am speaking (influenced by Alicia Juarrero[3]) about using estimations as a placid background, like an equilibrium structure, an indication of stability (small deviations around equilibrium).

Note: I am assuming that a project is not in an obvious state; it can also be in complex situations.

Conclusion:

Estimations – one way or the other, we use them. We use them implicitly, explicitly, or by deduction, and in various forms (relative, time-based, distributions…), and estimation makes sense as a concept.

I think that estimations are more often used by managers in a wrong way – for example as pressure, consciously/unconsciously/unknowingly. And this is actually the problem. Maybe it happens because of their mechanistic way of thinking, or lack of knowledge, or… They choose the wrong metaphor: they are not dealing with a refinery/factory, but with an ecology. Also – at least from what I saw – most of them do not take a look at what psychology and neuroscience have to say about this and adapt their actions.

I do not think it is ok to encourage the dichotomy between ProEstimate and NoEstimates; I think there is a continuum between them.


[1] Adrian Lander, “LinkedIn discussion”, https://www.linkedin.com/feed/update/urn:li:activity:6426405267491164160

[2] Dave Snowden, “Humans are messy”, http://cognitive-edge.com/blog/humans-are-messy/

[3] Alicia Juarrero, “Safe-Fail, NOT Fail-Safe”, https://vimeo.com/95646156

An example of what it means for a deeply experienced person to hold a hiring interview

Yesterday I wrote about why it matters that a deeply experienced/senior person holds the interview.

Today it is time to write about such a person (although to call him senior is too little; he is much, much more). This is about a special and important example of an action of his, in the context of interviews. It is special and important because he helped people who are now doing well because of him.

His name is Flaviu Boldea.

Almost 4 years ago we held lots of interviews. I was sad because I saw good/nice/pleasant people who did not pass the interview. In his calm voice, he told me that it does not have to stop there: if I see a candidate who is ok as a person but is not yet passing the evaluation, I can offer my time to help him/her prepare. It is important to say that he had done this several times before, so his words were not empty ones.

I liked that he did not transform this into a company process. No, it was about our willingness, as individuals, to help in our free time. It was about taking the responsibility of doing something and not abdicating it after saying no. It was not easy at all, but it was rewarding.

Experience/type of people involved/seniority/professionalism/craftsmanship/… matters.

About an anomaly in interviews and evaluations within companies

Too often I have begun to see a strange thing regarding the people who conduct hiring interviews or evaluate other people within companies: those persons are not actually seniors (deeply experienced), although there are also seniors in the company who could handle this part. As I said, this is a thing I experienced – maybe it is not generally applicable, but it made me think deeply.

Another detail I did not mention is that these evaluations are made against a checklist, sometimes with a Dreyfus-model representation.

I am not speaking about the practice whereby, in hiring interviews, team members can join just to get a sense of the person being interviewed.

I try to be careful about dichotomies, in the sense that people too often use dichotomies in situations where they do not apply. So, in our context, I can imagine that at a certain moment a person without much experience might hold an interview/evaluation because of a special context (for example, he/she is the only one available and there is an urgent need).

Why did I think this might be a problem? Because:

  • experience matters. It matters because of the different types of situations that person has been through, the different skills he/she has acquired, the knowledge he/she has gained, the bad experiences he/she has lived (all those lessons learned the hard way), the familiarity with situations,…;
  • it is hard for an inexperienced evaluator to transcend the checklist. What if that checklist is of no direct help – does that mean the candidates will not be evaluated properly?;
  • it will be hard for him/her to see not just the letter of the law, but also the spirit of the law;
  • the language will be different. Here I remember Alistair Cockburn’s work: a Shu-level person has a different language than a Ha-level, Ri-level or Kokoro-level person. A Kokoro-level person, having passed through all the stages, if I can say so, will be able to understand and recognize all the other levels, even the Kokoro level itself. For an inexperienced evaluator this means it will be a problem to decode/interpret/analyze/acknowledge what the respondent says;
  • go to other domains, like medicine: is it normal for an intern to judge the level of an experienced surgeon?

It is important to mention that this evaluation can have an important impact on the person being evaluated (money, image, dreams, promotion can all be affected in a bad way) and also on the company. Should an evaluation be left in the hands of the inexperienced? I hope not. I hope to see human judgment prevail, not actions driven by a checklist in the hands of inexperienced people. (Note: checklists are good, but when I see that the answer to every type of problem is to make a checklist, then maybe something is not ok.)

Shallow and risky view of technical debt

I spoke about technical debt in previous posts here, here, here and here.

Once again I have noticed how the shallow view of (what is nowadays called) technical debt can ignore risky situations – risky because some people can ignore certain things which might be problematic. I am referring to some managers, but there are also some “shallow” developers who have an apparently good discourse but nothing else. They think that a number (like the one offered by Sonar, for example) is enough to provide comfort when handling/managing/running/reporting on a project, when actually that number should not be read as covering so much.

So, here are some premises:

  1. I said “nowadays” above because I noticed a difference between what technical debt meant initially, in 1992, and what it means today for a lot of people.

So, the original intended meaning was that technical debt is about the misalignment between the current code and the known requirements[1]. I think what Jim Coplien said might also fit here: maybe it wasn’t the requirements that changed, but our understanding of the requirements[2]. Ward Cunningham actually made a new video about technical debt in 2009, in which he “reflects on the history, motivation and common misunderstanding of the ‘debt metaphor’ as motivation for refactoring”[3].

What technical debt means now: messy code.

I think that, maybe, it is good for the original message/meaning to be known, in the sense that it might bring a new, nonconscious perspective – a perspective that can ultimately help, by shaking the very foundations, toward a better end.

  2. Martin Fowler depicts technical debt in an interesting way by using a quadrant[4]. This part is important because it shows us dimensions of the concept that are not covered in our analyses/actions/approaches and are not easily summarized into a number.
  3. Static code analysis tools do not give us clear, direct information regarding the SOLID principles, the “4 rules of simple design”, or lean architecture[5]. For example:
  • Regarding SOLID: SRP can be deduced by looking at coupling and cyclomatic complexity; DIP by looking at cyclomatic complexity; ISP by looking at empty methods, long methods and cyclomatic complexity; LSP… nothing[6].
  • The “4 rules of simple design”: “reveals intention” is a matter of human judgment[6].

“No duplication” might seem easy to measure, but it is not quite so. I agree that there are good tools for this, but that does not mean all duplication is bad at a given moment. I have also observed that it is not about one number but, in fact, several numbers, because the result depends on the minimum number of block lines considered when making the comparison (try running a duplication analysis while varying the minimum block size – see the sketch after this list).

  • “Lean architecture”: a special context might be built to put lean architecture in its place, but even if we do this, anomalies can and will still be introduced. I do not think it is that easy to measure via static code analysis tools;
  4. I understand the need for tools, and I’m all for using them – I’ve just recently developed two – because, after all, we are homo faber. But besides that, we are also homo narrans, and these are among the axes of what really makes us homo sapiens[7].
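To illustrate the “no duplication” point from the list above, here is a toy sketch (TypeScript; illustrative only, not how Sonar, NDepend or any real detector works internally) showing that the “duplication number” depends on the minimum block size chosen for the comparison:

```ts
// Toy duplication detector: counts how many blocks of `minBlockSize`
// consecutive lines occur more than once in the source.
function countDuplicatedBlocks(lines: string[], minBlockSize: number): number {
  const seen = new Map<string, number>();
  for (let i = 0; i + minBlockSize <= lines.length; i++) {
    const block = lines.slice(i, i + minBlockSize).join("\n");
    seen.set(block, (seen.get(block) ?? 0) + 1);
  }
  let duplicated = 0;
  for (const count of seen.values()) {
    if (count > 1) duplicated += count - 1; // repeats beyond the first
  }
  return duplicated;
}

const source = ["a", "b", "c", "a", "b", "x", "a", "b", "c"];

console.log(countDuplicatedBlocks(source, 2)); // 3 duplicated 2-line blocks
console.log(countDuplicatedBlocks(source, 3)); // 1 duplicated 3-line block
```

Same code, two different “numbers” – which is why a single duplication figure should not be read as the reality of a codebase.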

Conclusions:

– In my circles, a new idea (bad or good) is usually justified by saying: “At least now we have something; before, we had nothing”. The premises mentioned above are a way to contribute more to that “at least now we have something” – now I would say “we have a little more”;

– The following question often arises in my circles: “What can we do better?” The intent behind the mentioned premises is the belief that, by exposing them, we will be in a much better position to expose/treat/manage/act on/test/tackle technical debt;

– It’s good that we have Sonar, NDepend, … I do not deny their usefulness. But we have to understand their limited applicability;

– I understand that there is a need to show our customers, where needed and applicable, that we handle/manage/tackle technical debt. Let’s say it is like sales/marketing we have to do, if I can put it that way. But at the same time, it should not be mistakenly believed that those numbers will save a project or even reflect reality. That’s why it means nothing to me to hear someone say that “the technical debt is just 1/2/3/… days of work”;

Small note: many technologies, especially those based on JavaScript, do not yet have code analysis tools that are as pertinent as they could be. JavaScript is an OOP language, for example, and while it’s great that there are tools like JSLint, or tools to find duplicates, they do not quite show the harsh reality of a situation (I am thinking of design, OOP and architecture when I say this).

– I do not think that technical debt can be described by just a single number; just look at the dimensions exposed by Martin Fowler. So, we might have a management problem which should not be handled only via Sonar or the like. In those dimensions, I think, we might spot some social things;

– I do not believe it is enough to establish the rules for Sonar or the like together with the client, use them, and that’s it; we have to go beyond that;

– We need seniority and human judgment; a tool cannot offer these. It can help, but it cannot fully substitute for them;


[1] Michael “Doc” Norton, “Technical Debt”, https://cleancoders.com/episode/technical-debt-episode-1/show

[2] I am still looking for the reference…

[3] Ward Cunningham, “Debt Metaphor”, https://www.youtube.com/watch?v=pqeJFYwnkjE

[4] Martin Fowler, “Technical Debt Quadrant”, https://martinfowler.com/bliki/TechnicalDebtQuadrant.html

[5] Jim Coplien, “Lean Architecture”, https://www.amazon.com/Lean-Architecture-Agile-Software-Development/dp/0470684208

[6] Michael “Doc” Norton, “Tracking and Managing”, https://cleancoders.com/episode/technical-debt-episode-2/show

[7] Dave Snowden, “Of material objects” , http://cognitive-edge.com/blog/of-material-objects/