Monday, 15 December 2014

(Kai)Zen and the art of doing (almost) nothing

Have you ever attended a causal analysis meeting and found that just about everyone was offering random solutions? I once attended such a meeting where someone walked in, looked at the problem statement on the board for a moment and started proposing solutions non-stop.
It turns out that we are hardwired to respond rather than to listen. You can see this for yourself by playing the game "123 go" from https://leanpub.com/CollaborationGamesToolbox (a very good collection of games around the agile mindset :)
A random response is harmless in itself. The trouble starts when every response is stated as a solution. No wonder more than half of today's problems are the result of yesterday's solutions.
Lean software methods attempt to break this vicious cycle with Kaizen, a Japanese word that roughly translates to "good change". Kaizen is best understood as a practice based on two principles:
1. The principle of context
2. The principle of minimalism
Let us understand these a little more.
Lean thinking employs many practices based on the principle of context. A Value Stream Map (VSM), a visual representation of the flow to be optimized, helps to set the context. So does a Kanban board, by visualizing the dynamics of a project, or a Gemba walk, where you *take a walk* to the source of the problem. Until the context is fully understood, your best response in a situation is to shut up.
Archimedes is claimed to have said, "Give me a lever long enough and I shall move the world". The unsaid wisdom here lies in the pivot (the fulcrum, for the physicists among you). The principle of minimalism focuses on the pivot, the point of maximum advantage. In an organic system like a software project, it is best discovered through shorter feedback loops: implement something small (your top-1 response to the problem context) and watch how the project responds to it over a sprint.
Take a moment to think again about the agile principle of Simplicity: the art of maximizing the amount of work NOT done.
PS: Let us observe a minute of silence and add one less problem for tomorrow. Amen.

Friday, 12 September 2014

Five finger caps

There is a children's story about a man who approaches a tailor with a meter of cloth to make a cap. He decides to push it a bit and asks whether the tailor can make two caps out of it. The tailor replies that there is just about enough cloth to do it. The man pushes further, for five caps and then ten. Each time he convinces the tailor to oblige.

What happens at delivery time is anybody's guess: five finger caps!

Some projects are also pushed for delivery like this.
Three weeks?
Maybe.
Two weeks?
There goes my weekend.
One week?
Who are you kidding!

When releases are expected in shorter and shorter time spans, especially in rather dynamic project scenarios like web development, how does one oblige?

The key to this, I believe, lies in the twin principles of pre-emption and collaboration. Risks and pain points (requirement changes, quality assurance, deployment) should be pre-empted through collaborative practices (workshops with the customer, collecting feedback using models, testing early and often, DevOps).

The activities could probably flow in the following sequence.
1. Collect, Discuss, Plan and Discuss again ...
The customer, the product owner and a few technical representatives from the project team can discuss the "wish list" for the weekly version, elaborate each item, qualify its behavior with a user acceptance scenario, do some batch sizing (S, M, L, XL) for each, and plan the sprint backlog together.

2. Analysis and Modelling
This has two activities:
1) The user scenarios are modeled (using a low-fi prototype) and customer feedback is sought.
2) The development team holds a workshop to discuss the data models and make reuse decisions.

3. Development
Coding and testing are done. If testing is handled by a separate functional team, it is recommended that they engage early and test together with the developers to improve quality before the formal hand-off to test.

4. Deployment
Pre-empt failures using a Continuous Delivery system.

Project teams can consider using shared incentives instead of conventional software project metrics. Some of these shared incentives can be the customer feedback score, (lack of) post-release defects, (lack of) delays and so on.
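Purely as a toy illustration of the idea (the metric names, scales and equal weights below are my own assumptions, not anything prescribed), such a shared score might look like this in Python:

# A toy sketch of a "shared incentive" score the whole team gains or loses together.
# Metric names, scales and equal weights are illustrative assumptions, not a standard.
def shared_incentive_score(feedback_score, post_release_defects, days_late):
    """Combine team-level outcomes into one number shared by dev and test."""
    feedback_part = feedback_score / 10.0            # customer feedback on a 0-10 scale
    defect_part = 1.0 / (1 + post_release_defects)   # fewer escaped defects -> closer to 1
    delay_part = 1.0 / (1 + max(days_late, 0))       # on-time delivery -> closer to 1
    return round((feedback_part + defect_part + delay_part) / 3, 2)

print(shared_incentive_score(feedback_score=8, post_release_defects=0, days_late=0))
# 0.93 -- one number that rises or falls for developers and testers alike

The point is not the formula but the sharing: everyone's bonus moves with the same number.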

Tuesday, 12 August 2014

Testing in troubled waters

System testers are like sleuths.

It thrills me when a tester talks about the curious incident of the test case in the night-time (inspired by Sherlock Holmes' "Silver Blaze") or the second crash ("The Adventure of the Second Stain").

Good system testers have a keen nose for potential defects. It pays to create and sustain a team of such good testers in every project.

System testing in agile projects may be better based on the twin principles of preemption and collaboration. In most common flavors of Agile, system testing is also time-boxed. The tester's challenge is to provide the best guarantee of quality within this time. On this premise, one can draw parallels between the requirements and motivation of agile system testing and the emerging concept of Rapid Test.

Preemption and collaboration are achieved in many projects through an informal hand-off from development, often referred to as the pre-test or early test increment. Testers may use this to get a feel for the increment and give some informal feedback to the developers. An effective practice for this from Rapid Test is called "Mention in passing".

It is suggested that every sprint also has a formal hand-off point where development transfers to test. Typically, you may reserve 10-30% of the time in each sprint for this formal test. Additionally, you can plan one in every three sprints or so as a zero-feature sprint where the teams pay off some technical debt. The system testers can catch up on automation and some exploratory testing during this time.
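As a back-of-the-envelope sketch of that split (the 20% share, the ten-day sprint and the every-third-sprint cadence below are assumed numbers within the ranges mentioned above):

# A back-of-the-envelope sketch of the sprint split described above.
# The 20% share, the ten-day sprint and the every-third-sprint cadence are assumptions.
SPRINT_DAYS = 10          # a two-week sprint
FORMAL_TEST_SHARE = 0.2   # somewhere in the suggested 10-30% band

def plan_sprint(sprint_number):
    zero_feature = sprint_number % 3 == 0   # every third sprint pays off technical debt
    return {
        "sprint": sprint_number,
        "formal_test_days": round(SPRINT_DAYS * FORMAL_TEST_SHARE),
        "zero_feature_sprint": zero_feature,
    }

for n in range(1, 7):
    print(plan_sprint(n))
# Each sprint keeps 2 days for the formal test; sprints 3 and 6 come out as zero-feature sprints.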

System testing defects should not be used as the measure of testing efficiency or development quality. It is better to use shared incentives between the two teams, because when you collaborate, you need to share the pain and the gain. These shared incentives can be the number of customer defects (or the absence of them), the timeliness of the software increments and possibly some measurement based on your value throughput.

Happy Testing! 

Monday, 4 August 2014

Tinkering and soccer mom management

I am greatly influenced by Nassim Taleb and his writing on antifragility. The skills of a software craftsman are like our muscles: use them or lose them.

Tinkering is a development practice in agile projects where developers design and code a part of the problem (which the product is trying to solve), do some reflection (testing, or staring at the code, whatever works for you) and consider some refactoring.

To begin tinkering, the developer may not need any more preparation than a collaborative workshop on the architecture and the top backlog items. The design will evolve as you develop.

Each step while tinkering can give the developer an opportunity for deep learning: compiling and dealing with tricky, cryptic warnings; testing by exploring and the thrill of the first crash; building on target, and so on. All of these are the small mistakes that build resilience in our craft.

I suggest that we also look out for the soccer-mom pattern here. Soccer moms spoon-feed their kids so that they always win. This may limit the kids' ability to fail and learn when necessary. In projects too, any practice that tries to avoid all small mistakes may only facilitate bigger and costlier ones.

I think pairing in agile teams is like teaching someone to fish. The expert can touch base periodically, offer guidance for the next step and then step back. It is OK for the learners to try and fail.

All agile team members should at least be empowered with the courage to fail. 

Thursday, 31 July 2014

Tell your tale

Why are stories called so?
I think it is because they are meant to be told.
We lost this essence somewhere when we designed the first template for a story. We forgot it was never meant to be written down. It was meant to be told.
We Indians, above all, should be able to appreciate this the most. The best knowledge base we have ever had transcended generations by word of mouth: the Vedas!
A good story should have a plot, characters, a premise, key events and their subsequent course. Tell it with flair, passion and imagination.
Use your metaphor as the context and the base of your vocabulary. I remember reading somewhere that our epics and puranas were used as metaphors for the instructive dialogues in our Vedas and Upanishads.
Architects and product owners, put down your pens, uninstall MS Word, walk to the center and tell us a story. 

Saturday, 19 July 2014

Developer testing (continued)

Developer testing

In the last post, I discussed the four traps that discourage developers from testing. Let us consider some alternatives to them.

Test early and often, rather than test later
Explore, rather than cover
An arsenal, rather than one tool
Football (have fun!), rather than foosball

As developers, we could spend half of our time coding and the other half testing. This works best when we test early and test often. Each of us can keep our own style; maybe I would code for a day or two and then test for the next couple of days, while you may switch between the two every couple of hours.

Developer testing is a deep reflection on what we have coded. Compiling the code is probably the first step. I suggest that developer testing be an exploratory activity. Brave any path in your code fearlessly and be surprised at what you find there. Test the real code; use a stub when you are stuck.
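To make that concrete, here is a minimal sketch in Python (the pricing function and the rate lookup it calls are hypothetical names made up for this illustration): the conversion logic under test is real, and only the flaky external dependency is stubbed.

# A minimal sketch of "test the real code, use a stub when you are stuck".
# price_in_inr and get_usd_inr_rate are hypothetical names for illustration.
from unittest import mock
import unittest

def get_usd_inr_rate():
    raise RuntimeError("network call to a rate service: slow, flaky, easy to get stuck on")

def price_in_inr(price_usd):
    return round(price_usd * get_usd_inr_rate(), 2)

class TestPricing(unittest.TestCase):
    def test_conversion_with_stubbed_rate(self):
        # The conversion logic is exercised for real; only the external lookup is stubbed.
        with mock.patch(__name__ + ".get_usd_inr_rate", return_value=83.0):
            self.assertEqual(price_in_inr(2), 166.0)

if __name__ == "__main__":
    unittest.main()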

The following excerpt from T S Eliot's "Little Gidding" is an apt metaphor for exploring.
We shall not cease from exploration
And the end of all our exploring 
Will be to arrive where we started 
And know the place for the first time


In my opinion, it may not be necessary to regress all your developer test cases each time. To check the sanity of your design, you may keep a subset of test cases as a smoke suite. This concern, then, need not tie you to a particular test tool. Build a rich arsenal: templates, data generators, test case generators, whatever. Code and test at will.
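One way to carve out such a smoke suite, sketched here with pytest markers purely as an example of the idea (any tool in your arsenal could play the same role):

# A sketch of tagging a few developer tests as a "smoke" suite using pytest markers.
# Register the marker (e.g. in pytest.ini) to avoid warnings:
#   [pytest]
#   markers = smoke: quick sanity checks run on every change
import pytest

def add(a, b):
    return a + b

@pytest.mark.smoke
def test_add_sanity():
    # Part of the small suite regressed on every change.
    assert add(2, 3) == 5

def test_add_large_numbers():
    # Part of the fuller suite, run less often.
    assert add(10**9, 1) == 10**9 + 1

# Run only the smoke suite:  pytest -m smoke
# Run everything:            pytest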

Developer testing is a reflective technique. For all I care, you could even stare at your code for half the time. There are no particular rules for developer testing other than those you follow for coding. Have as much fun at it as when you code.


Tuesday, 15 July 2014

Developer testing

Developer testing - an oxymoron?


Developers love coding.
Coding ain't done till it is tested.
Developers hate testing.
I have noticed four traps that discourage developers from testing. I have named them (in no particular order) the test-later trap, the coverage trap, the one-tool trap and the foosball trap.

The test-later trap

This is probably the most common trap. In my company, it is a residual practice from "our waterfall" days. When code piles up, developer testing becomes daunting and dreary.


The coverage trap

Not all fixes are free. Some backfire too.
Long, long ago (perhaps not so long!), someone had an idea: if we cover (not test, mind you!) all the code that we have written, all is well.
Which self-respecting developer would like to do this work?

The one-tool trap

It is not enough that we test all that we code; all those tests need to be run again as regression. This would be difficult to achieve if we don't all use the same tool. And if our software is in maintenance, we should supposedly keep adding tests to the same (sometimes archaic) tool.
Really?

The foosball trap

I love coding because it is like playing football. There are a few rules, but I can always play in my own style. I can deftly pass to the striker near the box or curve it in like the great Italian playmaker.
When someone says "don't write a line of code until ...", well! Who wants to code or test like that?

So much for complaining.
Any suggestions on how to happily marry coding and testing?
See my next post.

If walls could tell stories


My team finishes another stand-up near the story-wall. Some of us lean against it for support; otherwise, the wall is largely neglected. In the first two weeks of our sprint, our QA occasionally reminds us to update the cards and move them to the correct states. By the third week, he too gives up.

Why is it that our story-wall has ended up as a rusty board?
Why are we not feeling motivated to move the cards?

I observe that the story-wall generally does not visually encourage us. A week or two into the sprint, we have at least as many cards on the board as our team size. Then some urgent tasks come in, some tasks take longer to move than we initially thought, and we wonder, "So many tasks?!"

An agile team structure is somewhat like a team formation in a game. We too have flamboyant forwards, game-controlling midfielders, playmakers and some great defenders.

I remember having read somewhere that many agile methods are influenced by sports. Maybe that is why it is also suggested that an agile team be 6 to 11 members in size. Most team sports have teams of this size.

I wonder if we can (re)model our story wall like a sports team formation, say a football (soccer) formation. We can choose a different formation each sprint to spice it up, like 4-3-3 or 4-4-2. There can be a goal when a developer moves her card to system test and another when the tester moves it to done. A review can be an assist. We can have golden balls and golden boots. The clock can tick as each day progresses. We can let each team member choose their position, maybe even personalize it with an avatar or photo. Five days into the sprint, when our score line reads 1-0 at 00:05, we can even break into a jig.
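Just to play with the idea, here is a tiny sketch of such a gamified board (the states, scoring rules and names are all my own invention):

# A toy sketch of a story wall scored like a football match.
# States, scoring rules and names are invented purely for illustration.
from dataclasses import dataclass, field

@dataclass
class Card:
    title: str
    owner: str
    state: str = "todo"   # todo -> in_progress -> system_test -> done

@dataclass
class StoryWall:
    formation: str = "4-3-3"
    goals: int = 0
    assists: int = 0
    cards: list = field(default_factory=list)

    def move(self, card, new_state):
        card.state = new_state
        if new_state in ("system_test", "done"):   # crossing either line scores a goal
            self.goals += 1

    def review(self, card):
        self.assists += 1                          # a review counts as an assist

wall = StoryWall(formation="4-4-2")
card = Card("Login page", owner="Asha")
wall.cards.append(card)
wall.review(card)
wall.move(card, "system_test")
print(f"Day 5 score line: {wall.goals}-0 ({wall.assists} assist)")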

What say?