Kill it With Fire Friday

I previously wrote about an idea I had for managing technical debt in the face of continuous and inevitable business pressures. I want to take a moment to follow up on some of the comments I’ve received about the technique, how it evolved, and where it proved strong or weak.

Feedback

The one piece of direct feedback I got externally was “why aren’t you just doing pair programming?” It’s a fair question. The short answer is: I think both should be done! There’s definitely some overlap: Pair programming helps with transferring domain knowledge amongst team members, for example.

Pair programming can also help reduce technical debt on a local scale, which in aggregate can help keep it from getting out of control at a global scale.

The weaknesses of pair programming by itself are that you don’t wind up pairing across certain layers of the stack – front-end and back-end engineers rarely pair together – and that it doesn’t create an opportunity to sit back and consider bigger-picture questions about development workflow, toolchains, etc.

Some of this will become clearer in the section about what worked.

Things That Didn’t Work

  1. Homework: This proved to be a bad idea. Business realities being what they are, people just couldn’t squeeze it in.
  2. Purely Free-Form Format: People were often so focused on their assigned tasks that they genuinely struggled to figure out what to look at to make significant, meaningful improvements to the system.
  3. Chores: Our main codebase’s test suite was hit-or-miss in terms of both coverage and quality. Reviewing test cases to ensure that they were readable without much surrounding context, and that they captured intent effectively, thus became a difficult objective.

Changes That Happened

The biggest change I made was to the format: from 1 hour per week plus 1 hour of ‘homework’, to 2.5 hours per week and no homework.

The breakdown of the 2.5 hours was roughly:

  1. 15 minutes: Quick review of automated stats (changes in code coverage, etc.).
  2. 30 minutes: Quick discussion about who wants to do what.
  3. 1.5 hours: Actual coding, with people able to ask for help right there on the spot.
  4. 15 minutes: Merge each other’s branches. The key rule was to never merge your own branch (see the sketch below).
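
For concreteness, here’s a minimal sketch of what that merge step looked like at the command line; the branch name shown is hypothetical, and the one hard rule was that someone other than the branch’s author does the merging:

    # Alice merges Bob's KIWFF branch (the branch name is made up).
    git fetch origin
    git checkout master
    git merge --no-ff origin/kiwff-bob-prune-dead-code  # --no-ff keeps an explicit merge commit
    git push origin master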

Things That Did Work

First and foremost: after the above changes, we simply started getting more done.

We also started seeing a better transfer of understanding across functional boundaries. Our main front-end coder, finally getting some help setting up the relevant infrastructure and feedback on how his test cases were written, began having the key a-ha moments: how to write code to be testable (reducing the overhead), the value of TDD, and so forth. Previously, the sheer complexity of establishing a front-end TDD environment in a Rails app with the asset pipeline had made that a high barrier to overcome.

He even went on to start pushing for more parity between front-end and back-end tooling, such as coverage analysis for the JavaScript code.
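
To give a sense of the parity he was after, a JavaScript coverage run might look something like this – a sketch assuming Istanbul and Mocha, since the post doesn’t pin down our exact JS tooling, and the spec path is hypothetical:

    # Run the JS suite under Istanbul for coverage numbers comparable
    # to SimpleCov's on the Ruby side.
    istanbul cover node_modules/.bin/_mocha -- spec/javascripts --recursive
    istanbul report text-summary  # statement/branch/function/line percentages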

I consider that sort of acceleration of learning and expertise to be a huge win.

What I Would Do Differently Next Time

I would automate the stats collection and broaden it, to bring quantifiability to the foreground. I would want to begin every session with a review of how things had changed during the week, and end every session with a review of how things had changed during the session.

Coverage. Complexity. Any metric that can be meaningfully consumed, I’d want rolled into a report – generated at the push of a button – that told people right away how much good they did.

Obviously many things are difficult to quantify, such as workflow improvements, but even there, before/after comparisons could be useful (whether it’s benchmarking how long commands take, comparing how discoverable workflows/steps are, or counting how many steps are needed for key tasks).
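
To make the push-button idea concrete, here’s a minimal sketch of such a report, assuming SimpleCov for Ruby coverage and flog for complexity. The script name is hypothetical, and the coverage/.last_run.json layout matches SimpleCov versions of this era, so verify it against yours:

    #!/bin/sh
    # kiwff_report.sh - hypothetical push-button stats report.
    # Assumes the test suite has just run, so SimpleCov's output is fresh.
    set -e

    echo "== coverage (SimpleCov) =="
    # SimpleCov records the last run's percentage in coverage/.last_run.json.
    ruby -rjson -e 'j = JSON.parse(File.read("coverage/.last_run.json")); puts "#{j["result"]["covered_percent"]}%"'

    echo "== complexity (flog) =="
    # --score prints only the totals, not per-method detail; lower is better.
    flog --score app lib

Run it once at the start of the session (teeing the output to a file) and once at the end, then diff the two; the week-over-week review at the top of each session could work the same way against the previous snapshot.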

I would also want to find ways to give the participants a greater sense of agency in the process, and make it so they could come to the table with a stronger agenda, rather than just aimlessly searching for tasks.
