Why Software Projects Fail
This topic is inspired by the fact that a high proportion of software projects fail, and most of them fail because of errors in requirements definition.
On this page, we will look into why this happens and offer some explanations and suggestions.
The Consequences of Not Defining Requirements Properly
According to the Standish Group, only 16% of software projects are completely successful. 53% finish late, run over budget, or lack promised functionality. 31% are abandoned. The same research finds that requirements errors account for 69% of the failures, with management and resourcing issues accounting for the rest.
These problems are exacerbated in larger projects. If your software has more than 10,000 function points, it has a 65% chance of cancellation, according to Software Productivity Research Inc. This drops to 25% if you have fewer than 5,000 function points. In one year, 285,000 years of work effort were written off in the organizations studied by Software Productivity Research Inc. If you apply a modest hourly rate of $50 to this, you get a write-off of roughly $20 billion, just in the companies studied.
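For the curious, the write-off figure can be reconstructed with simple arithmetic. Note that the hours-per-year figure below is an assumption chosen to match the article's stated $20 billion total; the source does not say what conversion it used.

```python
# Rough reconstruction of the write-off arithmetic cited above.
WORK_YEARS_WRITTEN_OFF = 285_000
HOURS_PER_WORK_YEAR = 1_400   # ASSUMED productive hours per person-year
HOURLY_RATE = 50              # dollars: the "modest" rate from the text

write_off = WORK_YEARS_WRITTEN_OFF * HOURS_PER_WORK_YEAR * HOURLY_RATE
print(f"${write_off / 1e9:.2f} billion")  # → $19.95 billion, i.e. ~$20 billion
```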
To add insult to injury, the cost of resolving defects rises exponentially the later in the software life cycle they are detected. These are the comparative magnitudes of resolving defects at the various stages:
    During requirements gathering:   1.0
    During unit testing:             2.0
    During acceptance testing:       5.0
    After live rollout:             20.0
In other words, it costs twenty times as much to fix something that is discovered after live rollout as something that is discovered during requirements gathering. This is because of the rippling effect of all the people and processes that are added to the equation the further along the life cycle you travel.
What's so Difficult about Defining Requirements?
There are some classic explanations for the failure to define requirements properly, some of which may be unnervingly familiar.
Difficulty of thinking in the abstract
This problem is particularly significant in ground-up projects where new functions are being created.
Inability to predict the future
This refers to the need to predict future markets, regulations, technologies, and so on, which is very tricky in the context of system requirements.
Techs want to dive straight into the code
This is where the technicians think they have got the gist of what is required and the best way to proceed is to start building something. This is a bit like trying to put up an office tower without architectural drawings and would never be contemplated in the construction industry. So why do we do it in IT?
Deadline/budget pressures force you to gloss over requirements
This is a bit like the previous issue but for different reasons. There are two points to make here. Firstly, all the evidence shows that cost cutting during the requirements phase is suicide and is to be resisted strenuously. Secondly, it's harder to speed up analysis than it is to speed up development, so requirements studies are susceptible to deadline pressure.
New technology in search of an application
In this case, the 'real' requirement is to try new gadgets. Therefore the business and functional requirements are likely to be somewhat aimless.
Returning to the question of overruns and delays: what we're suggesting is that perhaps the real cause is that the project was estimated wrongly in the first place. Software estimation is a thorny subject in its own right, but it falls outside the scope of this article.
User versus developer
This refers to the problem of requiring business and technical people to communicate effectively on a host of issues which are understood by one camp or the other but not both.
No need for analysis; we're buying a package
Wrong! It doesn't matter whether you're building or buying: you still have to understand your requirements beforehand. Conventional wisdom is that you should spend the same effort on requirements either way.