After reading Minimalism beyond the Nurnberg Funnel I have been thinking about what implications these theories could have for software development itself.
First, system design. Minimalist documentation aims to be task oriented. The idea is that the readers (users) are trying to do something and need to solve a problem – how do I? – so they turn to the documentation. What if we apply these ideas to system design?
Well, I think we would come up with simplified systems that allowed users to perform tasks. The software would treat users as problem solvers and allow them to combine the tasks in whatever way they like to solve their wider problem. Now this starts to sound like Naked Objects.
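To make the parallel concrete, here is a rough sketch of the naked-objects idea: the domain object’s own behaviour is exposed directly as the tasks the user can perform, and the user decides how to combine them. The names here (Claim, submit, approve) are invented for illustration; this is not the real Naked Objects framework API.

```java
import java.lang.reflect.Method;
import java.util.Arrays;

// A plain domain object. Each public method is a task the user can perform.
class Claim {
    private String status = "New";

    public void submit()  { status = "Submitted"; }
    public void approve() { status = "Approved"; }
    public String status() { return status; }
}

public class NakedObjectsSketch {
    public static void main(String[] args) {
        Claim claim = new Claim();

        // A generic shell could discover the available tasks by reflection
        // and present them to the user directly, rather than scripting
        // fixed workflows on the user's behalf.
        System.out.println("Available actions on Claim:");
        Arrays.stream(Claim.class.getDeclaredMethods())
              .map(Method::getName)
              .forEach(name -> System.out.println("  " + name));

        // The user, as problem solver, chooses how to combine the tasks.
        claim.submit();
        claim.approve();
        System.out.println("Status: " + claim.status());
    }
}
```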
Second: what if we turn our attention from user documentation to system documentation? Usually the documentation software developers write to describe the insides of a computer system is based on the Nurnberg funnel principle. We assume that if a software designer writes down everything they think they know about the software, then someone who reads it will know the same information – open programmer’s head, insert Nurnberg funnel, pour in documentation... programmer can now understand software!
Obviously this isn’t going to work. This isn’t only true of existing software; it is also true of the systems we are going to write. Traditional software design (specifically the waterfall model) mandates that Architects and Designers – who in the traditional model are different to Programmers – should “design” a system, document the design and hand it over to the Programmers so they will know how to turn the design into code.
Again, this is the Nurnberg funnel model and it isn’t going to work. When we follow this model we are lying to ourselves. So what are we to do?
Minimalism probably isn’t an answer in this case. Firstly, the potential audience is so small that it is impossible to justify the extra cost. Secondly, minimalism relies on describing the tasks people will want to perform, and in general we don’t know what tasks future developers might want to undertake on our software. Yes, we can guess what they might want to do, but this really is little more than a guess. I have designed and built systems in the past with specific extension points for future developers – just add a new object here – but these only cover simple changes. The majority of the changes I’ve made to software involve expanding it in a way that wasn’t thought of in advance.
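For what it’s worth, here is a minimal sketch of the kind of extension point I mean. Every name in it (ReportExporter, ExporterRegistry, CsvExporter) is hypothetical – it illustrates the pattern, not any particular system.

```java
import java.util.ArrayList;
import java.util.List;

// The anticipated extension point: future developers implement this
// interface and register their object – "just add a new object here".
interface ReportExporter {
    String format();                  // e.g. "csv", "xml"
    String export(List<String> rows);
}

// One concrete exporter the original developers shipped.
class CsvExporter implements ReportExporter {
    public String format() { return "csv"; }
    public String export(List<String> rows) {
        return String.join("\n", rows);
    }
}

// The registry the original design provides; extending the system is
// meant to be as simple as calling register() with a new exporter.
class ExporterRegistry {
    private final List<ReportExporter> exporters = new ArrayList<>();

    void register(ReportExporter exporter) { exporters.add(exporter); }

    ReportExporter find(String format) {
        for (ReportExporter e : exporters) {
            if (e.format().equals(format)) return e;
        }
        throw new IllegalArgumentException("No exporter for " + format);
    }
}

public class ExtensionPointDemo {
    public static void main(String[] args) {
        ExporterRegistry registry = new ExporterRegistry();
        registry.register(new CsvExporter());   // the anticipated, simple change

        // A change the designers never imagined – say, streaming exports to a
        // message queue – won't fit this shape at all, which is the point above.
        System.out.println(registry.find("csv").export(List.of("a,b", "1,2")));
    }
}
```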
So while the thinking behind documentation minimalism can show the failings of traditional documentation, it cannot provide a solution. We need to look elsewhere. I’m not sure I know the solution, but I’ll make one suggestion.
The problem of making software developers familiar with a new code base is not a problem of documentation. Rather, it is a problem of knowledge management, and specifically knowledge transfer. Documentation is one tool in this arena, but not the only one. Traditional documentation relies on a just-in-case model: managers who are worried about risk have developers document the system just in case something needs to be done. This looks like a strategy to mitigate risk because we have “captured the knowledge”, but in fact the risk still exists because the documentation is not very useful.
The alternative model is just-in-time knowledge management. Here we ensure that, should a new developer need to know about an existing system, they can ask someone who knows at the point they need to. The knowledge transfer then takes place directly from one developer to another.
Instead of the solution being based on something you have, it is based on someone you know. On the face of it this model looks costly: in order to get the knowledge we need a one-to-one relationship rather than the one-to-many relationship documentation provides. But since the documentation does not work, this argument does not hold.
In fact, writing documentation that might not be read – and will not be understood if it is – represents waste. It is a waste of the developer’s time to write the documentation, it is a waste of paper to print it, it is a waste of space to store it and it is a waste of time to read it. The only purpose it serves is to allow a box to be ticked on a risk form.
Using the just-in-time model, all this waste is eliminated. If, and only if, the knowledge is needed will the cost of knowledge transfer be incurred. And since the cost is incurred in the future, it is cheaper today. (I probably don’t need to spell this out, but just in case: this model matches the ideas of Lean manufacturing, product development and software development.)
However, we are still left with the problem of mitigating the risk. To do this we need to keep track of the programmers who developed the system and those who have made changes since. This potentially changes the nature of the software development model, but in today’s Internet-connected world it should not be insurmountable.
The problem then is that people, even developers, do forget over time. Still, forgetfulness might not be that much of a problem in reality.
There are two types of system: those that are used and those that are not. The ones that are used tend to need changes over time: because these systems are living, people think of new features, and the systems need to be changed as the environment changes (think Y2K).
Then there are the systems that don’t get used. Well, if these systems are not used, it is unlikely they will need to be changed. Obviously, for the systems that are being changed there are people who know how they work. For those that don’t change, there might not be anyone who knows how they work, but neither is there anyone who needs to know.
Of course there will always be the odd exception – a program that runs for 10 years without a change and then needs updating – but how many systems are there like this? Does the existence of a few systems that require very occasional changes justify writing all that documentation for all those systems where it isn’t needed, won’t be read and won’t be understood if it is?