Monday, March 28, 2016

Opportunistic salvage driven by tests

Taking a break from my recent series of blogs about management, I’d like to discuss something that has come up with my current client. OK, I’m bending my rule not to blog about live events, but this isn’t about the client, it’s about an idea.

Opportunistic salvage

Many readers will have seen this scenario before: there is an existing system, developed under a previous, non-Agile approach, one that didn’t give quality a high priority. Some believe that the new work, based on an Agile approach, can take the current system “as is” and build on it; maybe some patches will be needed, but why throw away something that has already been built?

Something customers have? Something you’ve paid for? - they have a point.

And there are other people who believe, perhaps with good reason, the existing system has poor code quality, poor architecture, significant defects (bugs), and in general takes more effort to maintain than it would to rebuild.

As always the truth lies somewhere in between, and as always, determining where the truth lies is not easy. The best that can be said is: some parts of the system are good and can be built upon, and some parts bring more costs than benefits.

Anyone who has ever owned an old car will know the problem: you take the car to the garage and you are told “Ooo…. this is expensive to fix” - not as expensive as a new car, but expensive. Nor is this the first time you’ve taken the car in; this time it’s the steering, last time it was the electrics, next time… the clutch? The gearbox? And maybe there is rust.

You have to make a decision: should you spend the money to keep the car running or invest the money in a new car?

This is the question we face. Our solution is: Opportunistic Salvage.

We will look at each part of the system on a case-by-case basis and decide whether we can bring it across or whether we rebuild.

Critically this is driven by tests, automated acceptance tests to be specific.

Faced with a request, a requirement, we develop a series of acceptance tests - based on the acceptance criteria that go with the requested story.

These tests will be implemented in an automated ATDD fashion using a tool like Cucumber, SpecFlow, FIT or Selenium. We will then take the relevant part of the existing system and run it through the tests.
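To make this concrete, here is a minimal sketch of what one such automated acceptance test might look like in Python. Everything here is hypothetical: the module name and interface (`calculate_invoice`) are invented stand-ins for whatever piece of the legacy system is being run through the tests, and the acceptance criteria are made up for illustration.

```python
# Hypothetical sketch: automated acceptance tests for one story,
# written before deciding whether the legacy code can be salvaged.
# In a real project calculate_invoice would be imported from the
# existing system; here it is a stand-in so the sketch is runnable.

def calculate_invoice(customer_id, line_items):
    """Stand-in for the legacy module under test.

    line_items is a list of (unit_price, quantity) pairs.
    """
    return sum(price * qty for price, qty in line_items)

def test_invoice_totals_all_line_items():
    # Acceptance criterion: an invoice totals every line item.
    total = calculate_invoice("C-42", [(10.0, 2), (5.0, 1)])
    assert total == 25.0

def test_empty_invoice_is_zero():
    # Acceptance criterion: an invoice with no items costs nothing.
    assert calculate_invoice("C-42", []) == 0.0
```

Whether the legacy module passes or fails these tests is exactly the information we want before deciding to keep or rewrite it.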

Two points to note here:

  • Building an automated acceptance test system is needed for a proper Agile approach, so building this keeps with the new approach. (If you are not automating your tests to build fast feedback loops it’s unlikely you are really doing Agile anyway.)
  • Some work may be required to make the section(s) of code testable, i.e. make it a module with a well-defined interface. If the code is already a well-defined module then good; if not then it should be. Going forward, an Agile approach requires good code, so if an investment is needed then the investment is made.
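The second point - carving a testable seam with a well-defined interface - might look something like this minimal sketch. The names (`PricingService`, `_legacy_price`) are invented for illustration; the point is only that tests talk to the interface, not to the legacy internals behind it.

```python
# Hypothetical sketch: a thin, well-defined interface in front of
# tangled legacy code, so acceptance tests can drive it.

def _legacy_price(code):
    # Stand-in for entangled legacy logic; in reality this would
    # call into the existing system as-is.
    table = {"A": 10.0, "B": 7.5}
    return table.get(code, 0.0)

class PricingService:
    """The seam: tests (and new code) depend only on this interface,
    never on the legacy internals behind it."""

    def price_for(self, product_code: str) -> float:
        return _legacy_price(product_code)
```

Tests written against `PricingService` keep passing whether the internals behind it are later patched up or rewritten wholesale - which is precisely what lets us make the keep-or-rewrite decision case by case.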

Now one of two things will happen.

Possibly the lifted code will pass the tests; this is good and shows that the people who said we should build on the existing system were right. We can take a quick look inside the code to make sure it’s of good quality, but it probably won’t need much work - after all, it just passed the tests. If a bit of work, possibly refactoring, is required then, since there are now tests, this is quite doable.

On the other hand, if the module fails the tests… well, it is a good thing we found out now and didn’t leave it till later! (One wonders how the current system is working, but let’s not worry about that right now.)

The code has failed tests so obviously needs some work. Nobody wants to ship code which is known to be faulty.

Now we look inside the code and make a professional judgement: is it better to patch up this code and make do, or to rewrite it? Part of this judgement includes remembering that the product is expected to continue for some time to come.

These decisions are made on a case-by-case basis. There is no blanket decision to move all the old system, nor is there a blanket decision to write everything from new. What is good is kept, what is OK is patched up and what is bad is thrown away.

The people to make the judgement call are the people who are going to be working on the code: not the managers, not the business analysts, not the project managers, not the product managers, not even the testers. All these people may have a view, but there is one group of people who are recruited and employed because of their professional skills with code.

The programmers, aka software engineers, aka software developers.

Given a code module that is failing tests and is expected to have a long life these people need to make a professional decision on whether the better course of action is to patch and make do or to rewrite.

In making this decision we are acutely aware that sometimes writing from scratch can be faster than patching up something that already exists - faster both in the short run and in the long run, when ongoing maintenance costs are considered.

This is especially true in software, especially true when the people who originally wrote the code are long gone and especially true when the original code was written without tests.

These decisions are made on a case-by-case basis. Where code can continue it is salvaged and reused. Where that isn’t sensible, it isn’t.

So… no debate about legacy or new build: Opportunistic salvage driven by tests.

If those who believe the legacy product provides a solid base are right then there will be little new code written; sure, you will need to write some tests, but you will want to do that anyway.

If those who believe the legacy product should be rewritten are right then many tests will fail, code will be found to be crummy and it will be replaced.