In a recent article published by McKinsey, the authors note that application development spend as a percentage of total corporate IT investment increased from 32% in 1990 to almost 60% in 2011. The article goes on to observe that few organizations have a means of measuring the output of their application development projects, and recommends an output metric called use-case points (UCPs) that can then be used to measure the productivity of IT projects.
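For readers unfamiliar with the metric, here is a minimal sketch of the standard use-case points calculation (Karner's method, which the McKinsey article builds on): actors and use cases are bucketed by complexity, weighted, summed, and then scaled by technical and environmental complexity factors. The actor and use-case counts below are made-up illustration values, and the two adjustment factors are assumed to be precomputed from their usual questionnaires.

```python
# Standard weights from Karner's use-case points method.
ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

def use_case_points(actors, use_cases, tcf=1.0, ecf=1.0):
    """Compute UCP = (UAW + UUCW) * TCF * ECF.

    actors / use_cases: dicts mapping a complexity class
    ("simple" / "average" / "complex") to a count.
    tcf / ecf: technical and environmental complexity factors,
    assumed here to be precomputed from their weighted questionnaires.
    """
    uaw = sum(ACTOR_WEIGHTS[c] * n for c, n in actors.items())    # unadjusted actor weight
    uucw = sum(USE_CASE_WEIGHTS[c] * n for c, n in use_cases.items())  # unadjusted use-case weight
    return (uaw + uucw) * tcf * ecf

# Hypothetical project: 6 actors, 11 use cases, neutral adjustment factors.
ucp = use_case_points(
    actors={"simple": 2, "average": 3, "complex": 1},     # 2*1 + 3*2 + 1*3 = 11
    use_cases={"simple": 4, "average": 5, "complex": 2},  # 4*5 + 5*10 + 2*15 = 100
)
print(ucp)  # 111.0
```

Dividing UCPs delivered by effort spent gives a team productivity figure, which is precisely the team-level measure the rest of this post distinguishes from project output.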
I think the post is a bit confusing because its introduction mixes two related, but separate, concepts: measuring team productivity versus measuring project output. The confusion arises primarily from the use of the word "output" as it relates to projects. The output of an application development project is very different from the output of an application development team. Why? Because project output should always be tied to metrics that measure one of the following:
- Brand awareness
- Revenue growth
- Operational efficiency
- Service availability
A project that does not improve any of the above should not be undertaken in the first place - whether it is a software development project or the building of an entirely new line of business. Additionally, if a project is undertaken that addresses one or more of these objectives, then the output of the project should be measured against the metrics tied to those objectives. These metrics are usually very tangible, ranging from an increase in the number of likes on Facebook, to an increase in revenue per customer, to a reduction in operational churn, to the minimization of downtime.
Critically, the output of an application development team (measured in UCPs) can be stellar while the output of the project itself is abysmal. This can happen for any number of reasons, including:
- Scope risk - if the scope of the project is continuously shifting, then even the most productive team will find it hard to be successful. Managing scope usually ends up being the single most important responsibility of a project manager, and requires strong leadership as well as executive support and credibility (which usually comes with experience).
- Cross-team execution risk - application development projects rarely occur in a silo. This is especially true in larger companies, where multiple teams both from within and outside an organization are generally involved. It's also rare that every team involved buys into the project's output objectives; some teams may, in fact, openly oppose them. Of the seven reasons listed by IBM for why projects fail, the top five are all, in some way, related to the risks that arise from cross-team execution. Here as well, a strong project manager, backed by executive leadership, can prove decisive.
- Market risk - this is critical in software development leading to the launch of a new product or service category. Apple's Newton PDA was released several years before the PalmPilot, and well over a decade before the iPhone. The market was just not ready for it. One way of minimizing this risk is to take the lean startup approach and iterate through ideas. This doesn't necessarily change the market appetite for a product or service, but it gives you a good sense of what the market is willing to accept at a given point in time.
While I agree that measuring the output of an application development team is important (though I am not in 100% agreement with the methods suggested; more on this in a later post), the output of an application development project should be tied to metrics that drive business objectives.