Tuesday, February 22, 2011

The Agile 10 Step Requirements Model

I’ve been talking about the Agile 10 Step Requirements Model for a year or so now. It has appeared in a few conference presentations and will be at the centre of my ACCU 2011 conference presentation. But I’ve not written about it.

Now I have. This month’s ACCU Overload has a short overview of the model. I say short because each time I’ve sat down to write the model up it has got very long. This was a deliberate attempt to keep it short.

You can read it for free in Overload, or I’ve put the article itself on my website, The Agile 10 Step Model.

Many thanks to Ric Parkin and the Overload team.

Sunday, February 20, 2011

How does Agile relate to CMM Level 5?

A question in my mail box: “How does Agile relate to CMM Level 5?”

As I started to tap out the answer I thought: this might as well be a blog entry. So here it is.

Think of CMM, or rather CMMI which replaced CMM about 10 years ago, as a ruler. It is a way of measuring software development effectiveness. Is your development process 1cm good? 2cm good? or 5cm?

CMMI doesn't say how you should do development, only how you should judge effectiveness. Agile is a way of doing things - as opposed to "the state of Agile" which is a measure of effectiveness.

You can use Agile to be any CMMI level you like. CMMI doesn't care how you do it. Watts Humphrey (the father of CMM) used to say something along the lines of “Work on your quality and processes first, the levels will come by themselves.” That statement is very Agile. Unfortunately some organisations get it the wrong way around. I was at Reuters when they imposed CMM and destroyed a big chunk of their development capability.

The late Watts Humphrey did issue some process recommendations in the form of the Personal Software Process and Team Software Process.

That’s the easy bit.

"The state of being Agile" might, or might not, conflict with Level 5 CMMI simply because different things are considered important. You might be CMMI level 5 and decidedly unAgile.

Ironically, being CMMI level 5 might mean you have to investigate Agile, because being level 5 means you are “self optimising”. If an organisation is level 5 and hasn’t looked at Agile it should, because Agile may help them improve.

Paradoxically, being level 5 itself makes it harder to improve. The risk of change is a lot greater because the organisation has more to lose - and probably has lots of procedures to update, making any change expensive.

CMM(I) tends to be associated with a certain way of doing things. Partly this is historic: when CMM(I) appeared Waterfall was the dominant model, NASA was the first organisation to reach level 5 and (at the time) was very Waterfall based, and CMM(I) tends to be more common in military work, which is also big and paperwork intensive.

So, some people believe that CMM(I) means following a very structured, heavy, Waterfall process. It doesn’t have to mean this but historically the two do tend to coincide.

The Software Engineering Institute issued a report in 2008 which discussed how CMMI and Agile could work together: CMMI or Agile: Why not embrace both! I don’t agree with everything in the report but I do agree with the general tone. CMMI and Agile are not alternatives and can be complementary.

Friday, February 04, 2011

Offshoring: not as simple as it’s often reported

A couple of footnotes to my last blog post on Capers Jones’ book, Applied Software Measurement. One of the points I noted was Jones’ suggestion that rising prices and costs in India and elsewhere would mean IT offshoring loses its financial benefits from about 2015 onwards.

Jones, by the way, later says that Indian and Chinese outsourcers are aiming to compete on quality, not price, in the long run. So we can expect to see the nature of offshoring change.

Shortly after posting that blog entry I noticed a piece in The Economist on the Indian ID programme, Identifying a billion Indians (27 January, subscription required). The Indian Government has embarked on a scheme to give unique identity numbers to the entire population.

The first thing that got my attention was not that this was an IT based scheme (no surprise there) but who was doing it. Not Tata (TCS), not Wipro, not Infosys. It is Accenture and L-1 of the US and Morpho of France. I’m sure much of the work is being done in India but the Indian Government has given the work to foreign firms. Let’s give a round of applause to the Indian Government for not feeling compelled to give work to local firms.

And then let’s note that offshoring/outsourcing goes both ways. It’s not all about the US and Europe losing out to India or China. It goes the other way too.

The second thing that got my interest was this: the work is split between these three firms 50%, 30% and 20%. Progress is reviewed regularly and work reallocated. The most effective firm during any one period gets the bulk of the work next time. That is a very enlightened way of working and gives the firms real incentives to do their best. It also shows flexibility with contracts.
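As an aside, the reallocation rule is simple enough to sketch. Here is a minimal illustration in Python - the firm names, the scores and the assumption that the fixed 50/30/20 shares are simply reassigned by performance rank each period are mine, not details The Economist reported:

```python
# Illustrative sketch of rank-based work reallocation. Assumed rule:
# fixed 50/30/20 shares, reassigned by performance rank each review
# period; the real contract terms were not reported in detail.

SHARES = [0.5, 0.3, 0.2]  # best performer gets 50%, and so on

def reallocate(performance):
    """Map each firm to its share of next period's work,
    given a dict of firm -> performance score this period."""
    ranked = sorted(performance, key=performance.get, reverse=True)
    return dict(zip(ranked, SHARES))

# Hypothetical scores: firm B did best this period, so it takes
# the 50% slot next period.
print(reallocate({"A": 0.92, "B": 0.97, "C": 0.88}))
# {'B': 0.5, 'A': 0.3, 'C': 0.2}
```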

Separately, some US readers might be pleased to learn that the Financial Times reports the big Indian outsourcers plan to do less work with US firms. European readers won’t be so happy to learn that Europe will replace the US as their target.

The reason: not competition, not prices or quality. US visa prices.

Wednesday, February 02, 2011

More facts and figures from Capers Jones

I continue my reading of Capers Jones’ Applied Software Measurement, as discussed a few entries ago - Software Facts - and I’d like to report some more of Jones’ findings.

These numbers are very insightful but, and it’s a but Jones acknowledges, the data is very shaky. As Jones says, “software measurement resembles an archeological dig. One sifts and examines large heaps of rubble, and from time to time finds a significant artifact.”

He readily admits that there is not really enough data to draw solid conclusions (less than 1% of the data needed is available) but this is where we are at. This is as good as it gets. Before anyone rushes to say “Capers Jones is wrong” ask yourself: “Do I have better data to base conclusions on?”

I find two weaknesses in Jones’ work. Firstly, his reliance on Function Points. I agree with him that lines of code is not a good measure but I have some doubts about function points for two reasons:
  • They do not incorporate algorithm complexity
  • Automating function point counting is difficult and produces only an approximation. Thus true function point counts are expensive, which makes it hard to calculate them often
But function points, for all their faults, seem to work.
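For anyone who hasn’t met them: a function point count is essentially a weighted sum over five kinds of system element (inputs, outputs, inquiries, internal files, interface files). Here is a minimal sketch using the standard IFPUG “average complexity” weights - the example system mix is invented purely for illustration:

```python
# Unadjusted function points: a weighted sum over the five IFPUG
# element types, here using the standard "average complexity" weights.
# Note the weakness mentioned above: nothing in this formula reflects
# algorithmic complexity.

AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_function_points(counts):
    """counts: dict of element type -> number of that element."""
    return sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())

# Invented example system:
example = {
    "external_inputs": 20,
    "external_outputs": 15,
    "external_inquiries": 10,
    "internal_logical_files": 8,
    "external_interface_files": 4,
}
print(unadjusted_function_points(example))  # 303
```

The expensive part in practice is not this arithmetic but identifying and classifying the elements in the first place, which is why frequent recounts are costly.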

My second issue with Jones’ work concerns his assumption of the waterfall model. Yes, he acknowledges Agile, he even says it is the most productive approach in some circumstances, but all his data and assumptions are shot through with waterfall. I suppose this is only natural; it has been the dominant model in the industry for 30 years or more.

OK, on with some numbers and conclusions. Again, these are almost entirely US based but are probably very similar elsewhere...

  • Jones repeatedly states and shows how quality and productivity are related. The most productive teams have the lowest bug counts and shortest schedules.
  • The three biggest costs on a software project, in order: rework (bug fixing), paperwork, and meetings & communication. Sometimes managerial costs are actually greater than coding costs.
  • Jones states as a little known fact that “excessive staffing may slow things down rather than speed things up”
  • Inspections (code, design, requirements) are the most efficient known ways of preventing problems.
  • Studies show that some developers are 20 times more productive than others, and some make 10 times as many errors as others.
  • Paperwork (requirements, technical specs, etc.) on the largest systems can be larger than one person’s ability to read during an entire career. (Think about that: you join a new project at 21, by the time you hit retirement you haven’t finished reading the documentation.)
  • Most corporate effort (time and money) tracking systems are incorrect and miss between 30% and 70% of the real effort.
  • Thus this data is essentially useless for benchmarking, creates cost overruns because it is so inaccurate, and is actually dangerous because it puts future projects at risk when used for estimates - see the worked example after this list. (Before you question this statement go and check: does your tracking system measure unpaid overtime?)
  • “More than half of large projects are scheduled irrationally” - a delivery date is set external to the development group without reference to capabilities.
  • Over 50% of large (more than 10,000 function points) projects are cancelled and the rest are almost always late and over budget.
  • In 2008 at least 25% of new projects used some elements of Agile.
  • Maintenance is now the dominant form of software development and is likely to stay that way.
  • There are several Y2K-like problems likely to occur in the next 40 years.
  • Even waterfall projects overlap phases - typically 25% of any phase is incomplete when the next starts. This rises to 50% when the handover from design to coding is considered.
  • It is even difficult to determine when a project really starts - less than 1% have certain start dates - and 15% have ambiguous end dates.
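To make the tracking point concrete, here is a trivial worked example. The numbers are mine, chosen inside Jones’ 30%-70% range: if your tracking system captures only 60% of real effort, any estimate calibrated on that data starts roughly 40% short.

```python
# Toy illustration of why incomplete effort tracking poisons estimates.
# Assumed: the tracking system misses 40% of real effort (unpaid
# overtime, unlogged meetings, etc.), within the 30-70% range Jones cites.

capture_rate = 0.6       # fraction of real effort actually recorded
recorded_effort = 600    # hours logged for a past, similar project

true_effort = recorded_effort / capture_rate
print(f"Recorded {recorded_effort}h; actually spent {true_effort:.0f}h")
# Recorded 600h; actually spent 1000h

# An estimate for a similar project based on the recorded figure
# has a built-in shortfall before work even starts:
shortfall = 1 - recorded_effort / true_effort
print(f"Built-in shortfall: {shortfall:.0%}")  # 40%
```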
Outsourced & Offshore
  • Outsourced software development companies are generally better than their clients and have a better record of delivering large systems.
  • Offshore work tends to be less productive than onshore work in the US - that does not mean it is not cost effective, but it is less productive in terms of function points per person per time period
  • Inflation and other cost changes mean that by 2015 the economic advantages of offshoring development work are likely to be gone. (Considering that it takes time to transfer work and offshore teams to become productive it is likely that very soon it will not make sense to send work overseas.)
  • Worryingly Jones reports that 75% of companies have deficient requirements practices. Since offshore work is particularly dependent on good requirements I would expect offshore projects to be more troublesome in this regard.
Standards and Quality
  • Jones finds no solid evidence that ISO 9000 improves quality or the ability to deliver; almost the opposite: it increases the cost of projects because it increases the amount of paperwork. Ironically, some ISO 9000 advocates have defended the approach to him on the grounds that ISO 9000 is not designed to improve quality! (Sounds like many people have been misled over the years.)
  • Military systems are the largest systems in the world, some over 350,000 function points. They are also the most expensive and usually paperwork driven.
  • Department of Defense (US) standards do not seem to improve software quality, they have sometimes been impediments and have certainly raised costs.
  • Microsoft, Oracle and SAP are singled out as having very poor quality control.
  • Interestingly he also suggests that once 25% of a large COTS application needs to be customised it is less expensive to build the same application from scratch.
Jones says that as of 2008 there were 700 known programming languages, with a new one appearing every month. The average life of a language is 15 years but the average life of a major software system is 20 years. These figures might be a little misleading - C is now nearly 40 years old while other languages survive only a few years - but the point is clear: there are lots of dead and orphaned languages out there (over 500, Jones thinks).

As noted before, most schedules are planned around an externally determined date. Jones gives this as the number one reason for project failure - number two is excessive schedule pressure, which is very similar. Later he suggests such schedules are so bad as to constitute professional malpractice.

This is interesting. On the one hand businesses need those deadlines. On the other, if projects were left to run their natural duration would the failure rate be so high?

He also comments that US development teams tend to be strong on technical skills but are let down by weak management and organisational issues. Later in the book he attributes much of this to western management’s preoccupation with cost control, which he contrasts with eastern concern for quality.

This argument seems reasonable but I’m not sure it stands up to analysis - although it might explain why Scrum can be “successful” even when technical practices are ignored.

In my experience US and European teams don’t always have the technical skills they should have - certainly talk to Steve Freeman or Jason Gorman and you will hear some horror stories. And while India has more CMM certified organisations than anywhere else, my personal experience is that although India has some very good developers, the IT bubble has attracted not just second rate developers but third rate ones as well. It’s also worth pointing out that Japan isn’t a software powerhouse; even Toyota struggles to apply Lean thinking to software.

Jones believes, from his studies, that management is the greatest contributor to success and failure at the project and company level. In some ways that is reassuring: it shows that there might be something in good management.

Interestingly, Jones also supports one observation I have frequently made myself: when redundancies occur Test and QA staff are usually the first to be let go, and this is counter-productive in anything except the very short run.

I’ll continue reading and maybe blog some more data. Applied Software Measurement has a tendency to repeat itself, and I wish Jones used more graphs for his data, but I still think it is worth reading and I recommend it.