Wednesday, March 23, 2011

Final roundup of facts from Capers Jones

In two previous entries I’ve reported some interesting statistics and findings - possibly facts - from Capers Jones’s book Applied Software Measurement (see Software Facts - well, numbers at least and More Facts and Figures from Capers Jones). I want to finish off with a few notes from the later chapters of the book.

On packaged software
  • Modifying an existing COTS package - I assume he includes SAP and Oracle here - is high-risk with low productivity
  • When changes exceed 25% it may be cheaper to write from scratch. (I’m not sure what it is 25% of - the total size of the package, its function points, or its features?)
  • Packages over 100,000 function points (approximately 1m lines of code) usually have poor bug removal rates, below 90%
On management
  • Jones supports my frequent comment that when redundancies and downsizing come, Test and QA staff are among the first to be let go
  • Projects organised using matrix management have a much higher probability of being cancelled or running out of control
  • Up to 40% (even 50%) of effort is unrecorded in standard tracking systems
On defects/faults
  • “Defect removal for internal software is almost universally inadequate”
  • “Defect removal is often the most expensive single software activity”
  • Without improving software quality it is not possible to make significant improvements to productivity
  • “Modifying well-structured code is more than twice as productive as modifying older unstructured software” - when he says this I assume he doesn’t mean “structured programming” but rather “well designed code”
  • Code complexity is more likely to be because of poorly trained programmers rather than problem complexity
  • Errors tend to group together, some modules will be very buggy, others relatively bug free
  • Having specialist, dedicated, maintenance developers is more productive than giving general developers maintenance tasks. Interleaving new work and fixes slows things down
  • Each round of testing generally finds 30-35% of bugs; design and code reviews often find over 85% (see the arithmetic sketch after this list)
  • Unit testing effectiveness is more difficult to measure than other forms of testing because developers perform this themselves before formal testing cuts in. From the studies available it is a less effective form of testing with only about 25% of defects found this way.
  • As far as I can tell, the “unit testing” Jones has examined isn’t of the Test Driven Development type supported by continuous integration and automatic test running. Such a low figure doesn’t seem consistent with other studies (e.g. the Nagappan, Maximilien, Bhat and Williams study I discussed in August last year).
  • Formal design and code reviews are cheaper than testing.
  • SOA will only work if quality is high (i.e. few bugs)
  • Client-server applications have poor quality records, typically 20% more problems than mainframe applications
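To get a feel for what those per-stage percentages mean in combination, here is a rough arithmetic sketch - my own illustration, not a calculation from the book - assuming each stage removes a fixed fraction of the defects still present, starting from an arbitrary 1,000 defects:

```python
# Rough illustration: cumulative defect removal when each stage finds a
# fixed fraction of the defects still remaining. The 30% and 85% figures
# come from the bullet points above; 1,000 starting defects is an
# arbitrary round number chosen for illustration.

def remaining_after(stages, removal_rate, initial_defects=1000):
    """Defects left after running `stages` removal stages in sequence."""
    defects = initial_defects
    for _ in range(stages):
        defects *= (1 - removal_rate)
    return defects

initial = 1000
for rounds in range(1, 7):
    left = remaining_after(rounds, 0.30, initial)
    print(f"{rounds} test round(s) at 30%: ~{left:.0f} defects remain "
          f"({100 * (1 - left / initial):.0f}% cumulative removal)")

left = remaining_after(1, 0.85, initial)
print(f"1 review at 85%: ~{left:.0f} defects remain "
      f"({100 * (1 - left / initial):.0f}% cumulative removal)")
```

On those assumptions it takes roughly six rounds of 30%-effective testing to match a single 85%-effective review - which is one way to read Jones’s claim that formal reviews are cheaper than testing.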
Documentation
  • For a typical in-house development project paperwork will be 25-30% of the project cost, with about 30 words of English for every line of code
  • Requirements are one of the chief sources of defects - thus measuring “quality” as conformance to requirements is illogical
Agile/CMM/ISO
  • There is no evidence that companies adopting ISO 9000 in software development have improved quality
  • Jones considers ISO 9000, 9001, 9002, 9003 and 9004 to be subjective and ambiguous
  • Below 1,000 function points (approximately 10,000 lines of code) Agile methods are the most productive
  • Above 10,000 function points the CMMI approach seems to be more productive
  • I would suggest that as time goes by Agile is learning to scale and pushing that 1,000 figure upwards
Jones also makes this comment: “large systems tend to be decomposed into components that match the organizational structures of the developing enterprise rather than components that match the needs of the software itself.”

In other words: Conway’s Law. (See also my own study on Conway’s Law.) It’s a shame Jones missed this reference; given how well the book is referenced on the whole, I’m surprised.

Elsewhere Jones is supportive of code reuse; he says successful companies can create software with as much as 85% reused code. This surprises me - generally I’m skeptical of code reuse. I don’t disbelieve Jones, but I’d like to know more about what these companies do. It has to be more about the organisational structure than just telling developers: “write reusable code”.

Overall the book is highly recommended although there are several things I would like to see improved for the next revision.

First, Jones does repeat himself frequently - sometimes with exactly the same text. Removing some of the duplication would make for a shorter book.

Second, as noted above, Jones has no numbers on how automated unit testing, i.e. Test Driven Development and similar, stacks up against traditional unit testing and reviews. I’d like to see some numbers here. Although, to be fair, that depends on Jones’s clients asking him to examine TDD.

Finally, Jones is very, very keen on function points as a measurement tool. I agree with him: lines of code is a silly measure and the arguments for function points are convincing. But I’m not convinced his definition of function points is the right one, primarily because it doesn’t account for algorithmic logic.

In my own context, Agile, I’d love to be able to measure function points. Jones rails against Agile teams for not counting function points. However, counting function points is expensive, and until it is cheap and fast Agile teams are unlikely to do it. Again, there is little Jones can do directly to fix this, but I’d like him to examine the argument.
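To make concrete what a function point count involves - and why it is neither cheap nor sensitive to algorithmic logic - here is a minimal sketch of an IFPUG-style unadjusted count. The component counts below are invented for illustration, and the weights are the commonly published average-complexity values; a real count also requires classifying each component’s complexity and applying a value adjustment factor, which is where much of the cost lies.

```python
# A minimal sketch of an IFPUG-style unadjusted function point count.
# The component counts are invented for illustration; the weights are the
# commonly published "average complexity" values. A real count also needs
# each component classified as low/average/high and a value adjustment
# factor applied, which takes trained counters and time.

# (component, count, average weight)
COMPONENTS = [
    ("external inputs",          12, 4),
    ("external outputs",          8, 5),
    ("external inquiries",        6, 4),
    ("internal logical files",    5, 10),
    ("external interface files",  2, 7),
]

unadjusted_fp = sum(count * weight for _, count, weight in COMPONENTS)

for name, count, weight in COMPONENTS:
    print(f"{name:26} {count:3} x {weight:2} = {count * weight}")
print(f"{'unadjusted function points':26} {unadjusted_fp:>12}")
```

Notice that nothing in the count reflects how much algorithmic logic sits behind those inputs, outputs and files - which is exactly the gap I’m uneasy about.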

I want to finish my notes on Jones’s book with what I think is his key message:

“Although few managers realize it, reducing the quantity of defects during development is, in fact, the best way to shorten schedules and reduce costs.”
